Aperture by Tailscale: Identity-Based AI Gateway for LLM Requests
Originally from tailscale.com
My notes
Summary
Tailscale launched Aperture, a centralized AI gateway (in alpha) that secures, monitors, and routes LLM requests across an organization using Tailscale’s identity layer instead of distributed API keys. It proxies to providers like OpenAI, Anthropic, and Google without requiring changes to existing tools, and bundles spending limits, user-level access control, and usage telemetry. Free during alpha (6 users included).
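Since the gateway proxies the providers' own APIs, existing tools can presumably be repointed by changing only the base URL. A minimal sketch, assuming an OpenAI-compatible client; the gateway hostname below is hypothetical, not a documented Aperture endpoint:

```shell
# Point an OpenAI-compatible SDK at the gateway instead of api.openai.com.
# Hostname is a made-up tailnet address for illustration.
export OPENAI_BASE_URL="https://aperture.example.ts.net/v1"
# Auth comes from Tailscale identity, so no real key is distributed;
# some SDKs still require the variable to be non-empty.
export OPENAI_API_KEY="unused"
```

The point is that no application code changes: the SDK picks up the new base URL from the environment and the tailnet supplies identity.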
Key Insights
- Solves a real pain: every team using multiple LLMs ends up with API keys leaked in .env files, no per-user attribution, and no spend visibility. Aperture centralizes this behind existing WireGuard identity.
- Key differentiator vs LiteLLM / Portkey / Helicone: no API keys to distribute. Tailscale identity = the auth. If someone leaves the company, you remove them from the tailnet and they lose LLM access automatically.
- Model-based routing means you can fail over between providers (e.g. send Claude traffic to OpenAI if Anthropic is down) transparently.
- 6 free users during alpha is aggressive; Tailscale is clearly betting the AI gateway becomes a core part of its enterprise pitch (alongside its existing zero-trust networking).
- Target buyer is ops/security at mid-size companies where 20-50 people use LLM APIs and nobody knows what’s being spent or leaked.
- Missing (not mentioned): prompt caching, prompt injection filtering, PII redaction. Those are where Portkey/LiteLLM compete harder.
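The model-based failover mentioned above can be sketched as an ordered-fallback router. This is an illustrative stand-in, not Aperture's actual implementation; the provider names and callables are invented for the example:

```python
# Sketch of provider failover: try each backend in order, return the
# first successful response. Provider functions here are hypothetical.

def route(prompt, providers):
    """Try (name, call) pairs in order; return (name, response) from the
    first provider that does not raise."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # e.g. outage or rate limit
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Usage: Anthropic is "down", so traffic transparently falls back to OpenAI.
def anthropic_down(prompt):
    raise ConnectionError("anthropic unavailable")

def openai_ok(prompt):
    return f"openai answer to: {prompt}"

name, reply = route("hello", [("anthropic", anthropic_down),
                              ("openai", openai_ok)])
# name == "openai"
```

Because the caller only sees the gateway, the client never learns which provider actually served the request, which is what makes the failover transparent.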