Verdict
Nanobot on a 32 GB mini PC is the verifiable, self-hosted setup: roughly 4,000 lines of Python you can read in an afternoon, running on hardware with massive headroom. Pair it with Ollama for local-LLM-backed agentic tasks. This is what 'I trust nothing and verify everything' looks like in operation.
Setup notes
Python 3.11+, a virtualenv, and pip install nanobot. Read the codebase before deploying. Add Ollama with Mistral 7B Q4, then point Nanobot at Ollama as its OpenAI-compatible LLM endpoint.
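The endpoint wiring can be sketched with a stdlib-only client against Ollama's OpenAI-compatible /v1/chat/completions route. This is a minimal sketch, not Nanobot's actual config: the model tag and the helper names (build_chat_request, chat) are assumptions; use whatever tag ollama list shows on your machine.

```python
import json
import urllib.request

# Ollama serves an OpenAI-compatible API under /v1 on its default port.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(prompt, model="mistral:7b-instruct-q4_K_M"):
    """Build an OpenAI-style chat-completion payload.
    The model tag is an assumption; substitute your local tag."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(prompt, base_url=OLLAMA_BASE_URL):
    """POST to /chat/completions and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires Ollama running locally):
# print(chat("Say hello in one sentence."))
```

Any agent framework that speaks the OpenAI API, Nanobot included, points at the same base URL in the same way; only the config key names differ.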
Performance
Nanobot itself uses under 100 MB of RAM; the constraint is the LLM endpoint. Mistral 7B Q4 on this hardware generates about 18 tok/s, plenty for personal use.
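As a back-of-the-envelope check on that figure: at 18 tok/s, a typical ~300-token chat reply lands in under 20 seconds. The helper below is hypothetical, just the arithmetic, and ignores prompt-processing time, which adds a few extra seconds on long contexts.

```python
def response_seconds(tokens, tokens_per_second=18.0):
    """Rough wall-clock time to generate `tokens` tokens at a steady rate.
    Ignores prompt-processing (prefill) time."""
    return tokens / tokens_per_second

# A typical ~300-token chat reply at 18 tok/s:
print(round(response_seconds(300), 1))  # ~16.7 seconds
```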
What breaks
- Multi-user (Nanobot is single-user by design)
- Sandboxing (Nanobot deliberately ships without sandbox guarantees)
Want to know more
See the full Nanobot review and the Intel NUC 13 / Mini PC buyer's notes.