Verdict
Hermes Agent on a Mac Mini M4 (24 GB) works, but you're paying Apple Silicon prices for a workload that x86 mini PCs cover for less. The reason to do it anyway: you want Hermes Agent AND meaningful local-LLM headroom on the same machine, in a near-silent desktop form factor.
Setup notes
Docker Desktop for Mac works; OrbStack is leaner. Pull the Hermes image and configure as usual. Pair it with Ollama running on the host with Mistral 7B Q4 or Llama 3 8B Q4. The Hermes container reaches host-side Ollama via host.docker.internal.
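A minimal sketch of the wiring, assuming Hermes reads its Ollama endpoint from an environment variable (the variable name OLLAMA_BASE_URL and the image tag hermes-agent:latest are assumptions here; check the Hermes docs for the actual names):

```shell
# Pull a Q4 model on the host; Ollama listens on 11434 by default.
ollama pull mistral:7b-instruct-q4_K_M

# Run the Hermes container, pointing it at host-side Ollama.
# host.docker.internal resolves to the macOS host from inside
# Docker Desktop / OrbStack containers.
docker run -d --name hermes \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  hermes-agent:latest

# Sanity check from inside the container: can it reach Ollama?
docker exec hermes curl -s http://host.docker.internal:11434/api/tags
```

If the last command returns a JSON list of models, the container-to-host path is good.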
Performance
Idle: 11 W. Under load: 25 W. Local Llama 3 8B Q4: 38 tok/s. Running the agent and a local LLM together is genuinely viable on the same desk machine.
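To reproduce the tokens-per-second figure yourself, Ollama can print its own timing stats (the exact model tag is an assumption; use whichever Q4 tag you pulled):

```shell
# --verbose makes Ollama print a stats block after the response;
# the "eval rate" line is the generation speed in tokens/s.
ollama run llama3:8b-instruct-q4_K_M --verbose "Say hi"
```

Run it a few times and ignore the first result, since the initial call includes model load time.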
What breaks
- Linux-specific tooling (you're on macOS)
- 70B local LLMs (need M4 Pro 48 GB)
Want to know more
See the full Hermes Agent review and the Mac Mini M4 / M4 Pro buyer's notes.