Setup tutorials
Step by step.
Concrete tutorials with the actual shell commands. From bare metal to working agent. Each one tested by us at least once.
Hermes Agent on a Raspberry Pi 5
End-to-end install of Hermes Agent on a fresh Raspberry Pi 5 (8 GB), accessed via Tailscale, with Claude as the primary LLM.
Tailscale for self-hosted AI dashboards
Set up Tailscale to access your agent dashboard from anywhere without exposing it on the public internet.
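The core of that setup is two commands from Tailscale's official docs; a minimal sketch (the install script URL and `tailscale up` flow are Tailscale's documented defaults):

```shell
# Install Tailscale via the official install script
curl -fsSL https://tailscale.com/install.sh | sh

# Join your tailnet; prints a login URL to authenticate the device
sudo tailscale up

# Show this machine's Tailscale IPv4 address — use it to reach the dashboard
tailscale ip -4
```

Once the node is on your tailnet, the dashboard is reachable at `http://<tailscale-ip>:<port>` from any of your other devices, with nothing exposed publicly.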
Ollama + Phi-3 mini on a Raspberry Pi 5
Install Ollama and the smallest credible local LLM (Phi-3 mini 3.8B Q4) on a Raspberry Pi 5. Useful as a fallback or for narrow tasks.
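The install boils down to three commands; a sketch using Ollama's official install script and the `phi3:mini` model tag from the Ollama library (download is roughly 2 GB, so expect a wait on the Pi):

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Q4 quant of Phi-3 mini (3.8B)
ollama pull phi3:mini

# Smoke test: one-shot prompt against the local model
ollama run phi3:mini "Reply with the single word: ready"
```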
Caddy reverse proxy with HTTPS for a self-hosted AI dashboard
Front your agent dashboard with Caddy on port 443 with automatic HTTPS via Let's Encrypt — no certbot, no nginx config.
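The whole point of Caddy here is that the config fits on a napkin. A minimal Caddyfile sketch — `dashboard.example.com` and the upstream port `8080` are placeholders for your own domain and dashboard:

```
dashboard.example.com {
    # Caddy obtains and renews the Let's Encrypt certificate automatically
    reverse_proxy localhost:8080
}
```

Point your domain's DNS at the server, run `caddy run` (or the packaged systemd service), and Caddy handles the certificate issuance and renewal on its own.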
Migrate OpenClaw to Hermes Agent — step by step
Concrete migration path with shell commands. Snapshot OpenClaw, install Hermes, port tools, migrate credentials, smoke test.
Ollama on a Mac Mini M4
End-to-end install of Ollama on the Mac Mini M4 16 GB, optimised for sustained inference. Includes thermal config, launchd persistence, and basic exposure via Tailscale.
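The launchd piece is a small LaunchAgent plist that keeps `ollama serve` running across logins and restarts it if it dies. A sketch, assuming a Homebrew install at `/opt/homebrew/bin/ollama` (adjust the path if you used the macOS installer):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key><string>local.ollama.serve</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/ollama</string>
        <string>serve</string>
    </array>
    <key>RunAtLoad</key><true/>
    <key>KeepAlive</key><true/>
</dict>
</plist>
```

Save it as `~/Library/LaunchAgents/local.ollama.serve.plist` and load it with `launchctl load` on that path.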
Nginx reverse proxy in front of a self-hosted AI agent
TLS termination, rate limiting, and basic abuse protection in front of OpenClaw or Hermes Agent using Nginx and Let's Encrypt. €0/year, 30 minutes.
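The shape of the Nginx config is the same for either agent. A sketch — hostname, upstream port, and certificate paths are placeholders (the cert paths shown are where certbot writes Let's Encrypt certificates by default); note `limit_req_zone` must live in the `http` block:

```nginx
# In the http block: 10 requests/sec per client IP, 10 MB of state
limit_req_zone $binary_remote_addr zone=agent:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name agent.example.com;

    ssl_certificate     /etc/letsencrypt/live/agent.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/agent.example.com/privkey.pem;

    location / {
        # Allow short bursts of 20 requests without queueing delay
        limit_req zone=agent burst=20 nodelay;

        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```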
Self-hosted AI Docker Compose stack
A full self-hosted AI stack as one docker-compose file: Hermes Agent + Ollama + Qdrant + Caddy. Bring up the whole rig in two commands.
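The skeleton of that compose file looks like this. A sketch: `ollama/ollama`, `qdrant/qdrant`, and `caddy:2` are the real upstream images; the `hermes-agent` image name is a placeholder for however you build or pull Hermes:

```yaml
services:
  hermes:
    image: hermes-agent:latest   # placeholder — build or pull your Hermes image
    depends_on: [ollama, qdrant]
  ollama:
    image: ollama/ollama
    volumes: [ollama:/root/.ollama]
  qdrant:
    image: qdrant/qdrant
    volumes: [qdrant:/qdrant/storage]
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes: ["./Caddyfile:/etc/caddy/Caddyfile"]
volumes:
  ollama:
  qdrant:
```

The two commands: `docker compose pull` then `docker compose up -d`.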
Self-hosted document RAG on a €25 VPS
End-to-end document Q&A pipeline using Hermes Agent + Qdrant + bge-large-en-v1.5 embeddings. PDFs in, citations out. No OpenAI dependency.