Specs at a glance

| Spec | Detail |
| --- | --- |
| CPU | Broadcom BCM2712 — 4× Cortex-A76 @ 2.4 GHz |
| RAM options | 4 / 8 GB LPDDR4X |
| Storage | microSD or NVMe via M.2 HAT |
| Power draw | 5–12 W |
| Form factor | 85 × 56 mm credit-card SBC |
| Local LLM capability | Small models only (≤ 3B Q4) |
| Agent score | 7/10 |
| Price point | €80–110 |
Overview
The Raspberry Pi 5 is the de facto first device for pocket AI in 2026. The 8 GB SKU has enough RAM for Hermes Agent without browser automation, or for Nanobot with everything enabled. NVMe via the official M.2 HAT removes the SD-card bottleneck. Power draw stays comfortably below 12 W at full load: it runs cool, silent, and around the clock. Local LLM inference is the hard limit: the realistic ceiling is 3B-parameter quantised models such as Phi-3 or TinyLlama, which trail Claude meaningfully on agentic tasks. For agents that delegate the LLM call to a cloud API, though, the Pi 5 is genuinely competitive with €30/month VPS plans.
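The "≤ 3B Q4" ceiling follows from simple arithmetic. A minimal sketch, assuming (our figure, not the article's) that 4-bit quantisation costs roughly 4.5 bits per parameter once quantisation metadata is included:

```python
# Back-of-envelope check for the "<= 3B Q4" ceiling. Assumption (ours,
# not the article's): 4-bit quantisation costs ~4.5 bits per parameter
# once quantisation metadata is included.

def q4_weights_gb(params_billions: float, bits_per_param: float = 4.5) -> float:
    """Approximate in-RAM size of the quantised weights, in GB."""
    # params_billions * 1e9 params * (bits / 8) bytes, expressed in GB
    return params_billions * bits_per_param / 8

print(q4_weights_gb(3))   # ~1.7 GB: leaves ~6 GB for KV cache, OS and agent
print(q4_weights_gb(7))   # ~3.9 GB: tight once context grows
print(q4_weights_gb(13))  # ~7.3 GB: weights alone nearly fill the 8 GB SKU
```

So a 3B Q4 model fits with generous headroom on the 8 GB SKU; anything much larger starts competing with the KV cache and the agent runtime itself.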
Best for
- First-time self-hosters wanting low power + always-on
- Hermes Agent without browser tool
- Nanobot deployments
- MCP server hosts
Not for
- Local LLM inference beyond ~3B param Q4 quants
- Browser automation (Chromium too heavy)
- Multi-user agent workloads
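The cloud-delegated pattern from the overview (agent loop and tools on the Pi, LLM calls over HTTPS) can be sketched as follows. The endpoint URL and model name are placeholders, not recommendations from this review; the body shown is the widely used OpenAI-style chat-completions shape:

```python
import json

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint

def build_llm_request(system_prompt: str, user_msg: str,
                      model: str = "hosted-model-name") -> str:
    """Serialise one OpenAI-style chat-completion request body as JSON.

    In this pattern the Pi never loads model weights: it only builds this
    payload, POSTs it to API_URL, and parses the reply, which is why even
    the 4 GB SKU has headroom for the agent loop and tools.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    }
    return json.dumps(body)

print(build_llm_request("You are a helpful agent.", "List my open tasks."))
```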
Compatible self-hosted agents
Tested working on Raspberry Pi 5, with the caveats from “Best for” / “Not for” above.
Where to buy
Manufacturer page: https://www.raspberrypi.com/products/raspberry-pi-5/. We don't have an active affiliate programme with this vendor — see our disclosure page for the full list of partners we do work with.
See: all pocket AI hardware · edge AI hardware buyer's guide · how we test.