PocketClaw · vol. 1 · 2026
guide #109

How to choose a self-hosted AI agent in 2026 — a decision tree

Editorial note: This article reports on a fast-moving space. Versions, install counts and timelines are accurate as of the “updated” date above. We re-verify against primary sources (CVE database, project repositories, vendor announcements) before each update. Send corrections to contact@pocketclaw.dev.

Problem
There are now 7+ credible self-hosted AI agent projects with different defaults, sandbox models and licensing. Most articles compare them in tables; few help you actually decide.

Solution
Six questions, in order, that filter the field down to the right agent for your specific situation. Each question links to the relevant deeper guide.

Six questions. In order. Each one cuts the field, until what's left is the agent you should install today. We aim for honesty over completeness — if the right answer is a managed-hosting service rather than self-hosting at all, we'll say so.

Q1: Does your data have to stay on your hardware, no exceptions?

If yes — regulated industry, GDPR data residency, classified material, strong personal preference — your answer is ZeroClaw (or ZeroClaw Lite for resource-constrained hosts). ZeroClaw runs entirely on local LLMs via Ollama, denies network egress at the iptables level, and is licensed under AGPL-3.0, so any commercialisation has to publish source. The hardware floor is real — 64 GB unified memory or a 24 GB GPU minimum for 70B-class models — but the data residency is genuine.

If you can let the LLM call leave the machine (cloud Claude, OpenAI, OpenRouter), continue.

Q2: Are you in a regulated industry with a CISO?

If yes — finance, healthcare, government, defence-adjacent — your answer is IronClaw. gVisor sandbox, hash-chained audit log, RBAC, SAML SSO, air-gap mode, active vulnerability bounty programme, formal threat model documentation. The license is source-available rather than OSI open source, and pricing starts at $750/seat/year. None of that is negotiable in regulated buying contexts; all of it is normal in those contexts.

If you don't have a compliance officer asking you these questions, continue.

Q3: Are you on macOS only, with an active Claude subscription?

If yes — Mac developer, all-Mac team, Anthropic-only LLM strategy — your answer is NanoClaw. macOS-native installer, Apple container sandboxing (genuinely strong on Apple Silicon), sub-second boot, Claude SDK integration. Drop multi-LLM and Linux portability for a tighter integration on the platform you already live in.

If you'll need Linux portability, multi-LLM support, or Mac is just one of several environments, continue.

Q4: Do you want to read every line of code you run?

If yes — solo engineer, custom-agent foundation, deep verification workflows, anti-vendor-lock-in — your answer is Nanobot. 4,000 lines of Python, single-file feel, no plugin marketplace, no telemetry, no auth layer. The auditor is you, reading the code in an afternoon. Trade-off: no sandbox by design, single-user assumed, OpenAI-compatible LLMs only (Anthropic via shim). Match this to your threat model first.

If you trust well-maintained sandboxed defaults more than your own code review, continue.

Q5: Do you want the deployment model where there is no server?

If yes — Cloudflare-native shop, low-volume edge agent, or you genuinely want to never SSH into a host again — your answer is Moltworker. Runs on Cloudflare Workers, free tier covers 100K requests/day, sub-ms cold starts, sandbox is the V8 isolate. Workers runtime forbids native binaries, browser automation and long-running processes — that rules out most heavy agent capabilities. Within those constraints, it's the cleanest small-deployment story in 2026.

If you need browser automation, native dependencies or sustained processes, continue.

Q6: Default safe, sandbox-on, multi-LLM, production-ready today?

That's Hermes Agent. Docker-sandboxed by default, explicit network and filesystem allowlists, mandatory approval flow for tool execution, multi-LLM support out of the box (Claude, GPT, Gemini, Llama via Ollama), threat model documented. ~22,000 GitHub stars in early 2026 and the project that most install guides published since March 2026 lead with.
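Deny-by-default allowlisting of the kind described above is conceptually just set membership: anything not explicitly listed is refused. A minimal sketch — the hosts, paths and function names here are placeholders for illustration, not Hermes Agent's actual configuration or API:

```python
# Hypothetical deny-by-default allowlists (illustrative values only).
ALLOWED_HOSTS = {"api.anthropic.com", "api.openai.com"}
ALLOWED_PATH_PREFIXES = ("/workspace",)


def egress_allowed(host: str) -> bool:
    """Network egress: permitted only to explicitly listed hosts."""
    return host in ALLOWED_HOSTS


def write_allowed(path: str) -> bool:
    """Filesystem writes: permitted only under listed prefixes."""
    return any(path.startswith(prefix) for prefix in ALLOWED_PATH_PREFIXES)
```

The point of the pattern is the default: a tool call to an unlisted host or path fails closed, and widening access is a deliberate config edit rather than an agent decision.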

For new deployments without a specific reason to choose otherwise, this is the answer.

The OpenClaw question, separately

Notice that OpenClaw doesn't appear above as a top recommendation. It's not because OpenClaw is bad — post-2026.4, it's genuinely fine. It's because if you're starting fresh in 2026, Hermes Agent is the easier on-ramp and Nanobot is the cleaner audit path. OpenClaw 2026.4+ is the right answer if you have an existing install with custom plugins or team familiarity. Migrate-vs-stay is its own decision; we cover it in [the migration guide](/guides/migrate-openclaw-to-hermes).

The "should I self-host at all" question, even more separately

If your reading of the six questions above is "none of these fit me — I just want the bot to work and I don't want to do ops," that's a legitimate answer. Managed services like ClawRift, NitroClaw, ClawGo and DeployHermes exist because not everyone should self-host. $19-100 per month buys you the agent without the VPS, the Caddy config, the cron jobs, the security advisories or the sleep deprivation.

Self-hosting is a choice. It's not always the right one.

Quick map

| If… | Pick |
| --- | --- |
| Data can't leave your machine | ZeroClaw |
| Regulated industry, CISO required | IronClaw |
| macOS-only + Claude | NanoClaw |
| Verify every line of code | Nanobot |
| Cloudflare Workers / serverless | Moltworker |
| Default-safe new deployment | Hermes Agent |
| Existing OpenClaw install with plugins | OpenClaw 2026.4+ |
| Don't want ops at all | Managed service (ClawRift/NitroClaw/etc.) |
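The quick map is a literal decision function: ask the questions in order, first "yes" wins, and the default-safe answer is what's left. A sketch — the parameter names are ours, not any project's API:

```python
def pick_agent(
    data_must_stay_local: bool,
    regulated_with_ciso: bool,
    macos_only_with_claude: bool,
    must_audit_every_line: bool,
    serverless_only: bool,
) -> str:
    """Walk the six questions in order; the first 'yes' decides."""
    if data_must_stay_local:
        return "ZeroClaw"
    if regulated_with_ciso:
        return "IronClaw"
    if macos_only_with_claude:
        return "NanoClaw"
    if must_audit_every_line:
        return "Nanobot"
    if serverless_only:
        return "Moltworker"
    return "Hermes Agent"  # Q6: the default-safe answer
```

Note the ordering matters: a regulated shop that also wants full data residency gets ZeroClaw, because Q1 is asked first.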

Then: hardware

Once you've picked the agent, the hardware decision is downstream. See [the hardware buyer's guide](/guides/edge-ai-hardware-2026). The short version:

  • First-time self-hoster, cheap: Raspberry Pi 5 (8 GB) + Hermes Agent
  • Workhorse: €450 generic Intel mini PC + Hermes Agent (full)
  • Local-LLM, 8B class: Mac Mini M4 (24 GB, €1,099) + ZeroClaw
  • Local-LLM, 70B class: Mac Mini M4 Pro (48 GB, €1,899) + ZeroClaw
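The memory tiers above follow from arithmetic, not marketing: at 4-bit quantisation a model needs roughly half a byte per parameter for weights, plus headroom for KV cache and runtime overhead. A back-of-envelope estimator (the 20% overhead figure is our assumption, and real usage varies with context length):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough memory estimate: weights at the given quantisation,
    plus ~20% (assumed) for KV cache and runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 70B at Q4: 35 GB of weights, ~42 GB with overhead -> the 48 GB tier
# 8B at Q4:  ~4 GB of weights -> comfortable on the 24 GB tier
```

This is why an 8B model fits the 24 GB Mac Mini with room for the OS, while 70B-class models genuinely need the 48 GB machine.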

The [agent-on-device combinations page](/agent-on-device) covers the realistic pairings with installed-and-tested verdicts.

Then: LLM strategy

Last decision. Three options:

  • Cloud LLM as primary: Claude (best agentic reliability) or GPT
  • Local LLM as fallback only: Ollama with Mistral 7B Q4 for cheap
  • Local LLM as primary: ZeroClaw + Llama 3.3 70B Q4 on Mac Mini
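The second option — cloud primary, local fallback — reduces to a routing wrapper: try the cloud provider, and on failure hand the same prompt to the local model. A sketch with hypothetical `call_cloud`/`call_local` callables standing in for real client code:

```python
def complete(prompt: str, call_cloud, call_local) -> str:
    """Cloud LLM first; fall back to the local model
    (e.g. Ollama + Mistral 7B Q4) if the cloud call fails."""
    try:
        return call_cloud(prompt)
    except Exception:
        # Outage, rate limit, or network egress blocked:
        # degrade to the local model rather than failing the task.
        return call_local(prompt)
```

In practice you would narrow the `except` to the client library's error types and log the fallback, but the shape of the decision is this simple.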

We have a complete [local LLM benchmark report](/guides/local-llms-benchmark-2026) if you want the numbers.

The thing nobody says

The "best" agent in 2026 is the one that's still maintained, still patched within 48 hours of disclosure, and still actively documented when something breaks at 11 PM. Project momentum matters more than benchmark scores. Hermes Agent has that momentum right now. OpenClaw post-foundation has it. NanoClaw, ZeroClaw, IronClaw and Nanobot have it within their niches. The fringes — small forks, new entrants without a security disclosure channel — don't.

Pick the answer the decision tree gives you. Then check that the project has shipped a release in the last 90 days, has a working security disclosure address, and has a public CVE record (zero CVEs is suspicious on a real install base — it usually means nobody's looking yet).
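The 90-day freshness check is plain date arithmetic once you have the last release date (from the project's releases page or API — fetching it is left out here). A sketch, with illustrative dates:

```python
from datetime import date, timedelta


def looks_maintained(last_release: date, today: date,
                     max_age_days: int = 90) -> bool:
    """True if the project shipped a release within the window."""
    return (today - last_release) <= timedelta(days=max_age_days)


# Illustrative: a release 50 days ago passes; one 151 days ago fails.
# looks_maintained(date(2026, 1, 10), date(2026, 3, 1))  -> True
# looks_maintained(date(2025, 10, 1), date(2026, 3, 1))  -> False
```

The other two checks (working disclosure address, public CVE record) don't automate as cleanly; those you verify by hand.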

Then install it.

Related

  • [Self-hosted AI landscape 2026](/guides/self-hosted-ai-landscape-2026)
  • [Hardware buyer's guide](/guides/edge-ai-hardware-2026)
  • [Pocket AI complete guide](/guides/pocket-ai-complete-guide)
  • [Security playbook](/guides/self-hosted-ai-security-playbook-2026)
  • [OpenClaw alternatives](/guides/openclaw-alternatives-2026)