PocketClaw · vol. 1 · 2026
★★★★★ · affiliate disclosed

OpenRouter

LLM gateway with unified API across 100+ providers. Our default for cost-optimisation and provider fallback.

Affiliate disclosure: revenue share on referred paid usage. We use them ourselves and would recommend regardless. Full list at /disclosure.

What it is

OpenRouter is the multi-provider gateway we configure as the default in our test deployments. It's a genuine cost-saver in mixed-provider workloads.

Why we use it

  • Single API endpoint covering Anthropic, OpenAI, Google, Mistral, Llama and more
  • Automatic provider fallback when a primary is rate-limited or down
  • Cost transparency — see exactly what each provider charges
  • OpenAI-compatible endpoint works with most agent frameworks
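Because the endpoint speaks the OpenAI chat-completions protocol, pointing an existing client at OpenRouter is usually just a base-URL swap. A minimal stdlib-only sketch; the model ID and the `OPENROUTER_API_KEY` env-var name are illustrative assumptions, not prescriptions:

```python
import json
import os
import urllib.request

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request against OpenRouter's
    OpenAI-compatible endpoint (illustrative sketch)."""
    payload = {
        # Provider-prefixed model ID, e.g. "anthropic/claude-sonnet-4.5"
        # (assumed ID; check OpenRouter's model list for current names).
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("anthropic/claude-sonnet-4.5", "Summarise this diff.")
print(req.full_url)
```

Sending the request is then a plain `urllib.request.urlopen(req)` (or the equivalent in whatever HTTP client your framework already uses); nothing OpenRouter-specific leaks into the calling code.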

Why we wouldn't

  • Slight latency hit vs direct provider calls
  • Some providers expose features through OpenRouter with lag

Best for

  • Cost-sensitive deployments with usage-based pricing
  • Multi-provider fallback strategies
  • Quick A/B testing across providers

Not for

  • Workloads where direct provider relationship is required for compliance

Long review

OpenRouter is our default LLM gateway for self-hosted agent testing. The single OpenAI-compatible endpoint covering 100+ models from 30+ providers solves a real problem: agent frameworks support "OpenAI-compatible" out of the box, and OpenRouter speaks that protocol while routing to whichever provider you actually want. Provider fallback is the killer feature in production: when Claude rate-limits, OpenRouter falls back to GPT or Llama without code changes. Pricing is at-cost from upstream with a small markup, and transparent enough that you can see what each call would have cost direct. The latency hit (an extra hop through OpenRouter) is real but typically under 50ms. Our affiliate disclosure: yes, we have a referral arrangement with OpenRouter. We used and recommended them long before the affiliate existed. Full disclosure at /disclosure.
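The fallback routing described above can be sketched as a request body: OpenRouter accepts a list of models to try in order when the preferred one is rate-limited or down. A sketch only; the model IDs below are assumptions for illustration, and you should check OpenRouter's current model list before relying on them:

```python
import json

def fallback_payload(prompt: str, models: list[str]) -> str:
    """JSON body asking the gateway to try each model in order,
    falling through on rate limits or outages (sketch)."""
    return json.dumps({
        "models": models,  # tried in order; first available one serves the request
        "messages": [{"role": "user", "content": prompt}],
    })

# Model IDs are illustrative, matching the Claude -> GPT fallback
# described in the review above.
body = fallback_payload(
    "Plan the next agent step.",
    ["anthropic/claude-sonnet-4.5", "openai/gpt-5"],
)
print(json.loads(body)["models"])
```

The point of keeping the fallback list in the request rather than in application code is that the retry logic lives in the gateway: the agent loop never sees the provider switch.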

Alternatives we've tested

  • Anthropic (Claude API): the LLM provider we default to for self-hosted agent workloads. Claude 4.5 Sonnet remains the best agentic model in 2026.
  • OpenAI (GPT API): GPT-5 is competitive with Claude on many tasks. The API is the most mature in the market; the agent behaviour is more variable.