PocketClaw · vol. 1 · 2026
guide #106

Edge AI hardware buyer's guide 2026 — Raspberry Pi 5 vs Mini PC vs Mac Mini vs Framework

Editorial note: This article reports on a fast-moving space. Versions, install counts and timelines are accurate as of the “updated” date above. We re-verify against primary sources (CVE database, project repositories, vendor announcements) before each update. Send corrections to contact@pocketclaw.dev.

Problem
Most edge AI hardware reviews benchmark unrealistic configurations or copy-paste manufacturer claims. The €100-€2,000 tier — actually relevant for individuals and small teams running self-hosted AI — is poorly served.

Solution
A 10,000-word buyer's guide with real benchmarks across the five device classes that matter, paired with concrete recommendations for five common budget bands and use cases.

TL;DR by budget

  • Under €150: Raspberry Pi 5 (8 GB) + 27W PSU + microSD + active cooler + Tailscale subscription. ~€140 all in.
  • €300-500: Generic Intel mini PC (Geekom IT13, Beelink SER7 or similar), 32 GB RAM, 1 TB NVMe. ~€450-500.
  • €600-1100: Mac Mini M4 (16 GB or 24 GB), default storage. €699-1099.
  • €1500-2200: Mac Mini M4 Pro (48 GB unified) or Framework Laptop 16. The local-LLM endgame in a small form factor.
  • €3000+: Mac Studio M3 Ultra or custom workstation with discrete GPU. Diminishing returns beyond this for most personal/small-team workloads.

The body of this guide expands each tier, benchmarks the realistic options, and lists what each tier actually does for self-hosted AI.

1. Methodology — what we tested and how

Every device in this guide has been bought (or borrowed from a colleague who bought one), set up from scratch, and run through our standard agent test suite (documented at [/methodology](/methodology)). The suite covers:

  • Single-step tool call (file read, summarise) — baseline reliability
  • Multi-step planning (3+ tool calls) — agent reasoning quality
  • Browser automation (navigate, extract structured data) — heaviest tool
  • Long context (50K-token document, 4 follow-up questions) — memory
  • Error recovery (request a tool that doesn't exist) — robustness

For local-LLM benchmarks we run four standardised quantised models: Phi-3 mini 3.8B, Mistral 7B, Llama 3 8B, and Llama 3.3 70B (where hardware permits), all at Q4_K_M quantisation. Tokens-per-second numbers are end-to-end on agent prompts, not pure inference benchmarks.
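If you want to reproduce our tokens-per-second numbers against an Ollama host, the final chunk of an `/api/generate` response carries the counters you need (`eval_count`, `eval_duration`, `total_duration`, all durations in nanoseconds). A minimal sketch of the calculation — the sample response values are illustrative, not our measurements:

```python
# Sketch: derive tokens/sec from an Ollama /api/generate final response.
# Field names follow Ollama's API; durations are in nanoseconds.

def tokens_per_second(resp: dict) -> float:
    """Pure generation speed: output tokens / generation wall time."""
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

def end_to_end_tps(resp: dict) -> float:
    """Tokens/sec including prompt processing (closer to agent-felt speed)."""
    total_tokens = resp.get("prompt_eval_count", 0) + resp["eval_count"]
    return total_tokens / (resp["total_duration"] / 1e9)

if __name__ == "__main__":
    sample = {  # canned final-chunk response, for illustration only
        "prompt_eval_count": 120,
        "eval_count": 256,
        "eval_duration": 16_000_000_000,   # 16 s generating
        "total_duration": 20_000_000_000,  # 20 s wall clock
    }
    print(f"generation: {tokens_per_second(sample):.1f} tok/s")
    print(f"end-to-end: {end_to_end_tps(sample):.1f} tok/s")
```

The gap between the two numbers is why our published figures (end-to-end on agent prompts) are lower than the pure inference numbers vendors quote.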

For agent compatibility we test Hermes Agent (post-2026.4), Nanobot, and ZeroClaw on each device. OpenClaw 2026.4+ where applicable. NanoClaw on Apple Silicon. IronClaw on x86 hosts only.

All benchmarks were run between April 8 and April 28, 2026. We will re-run when meaningfully new hardware ships.

2. Tier 1 — Single-board computers (€80-300)

The Raspberry Pi 5 is the canonical SBC for self-hosted AI in 2026, with the Orange Pi 5 Plus as the closest credible competitor. We've also briefly tested the Khadas Edge2, Banana Pi BPI-M4 Berry, and Radxa Rock 5B+, but none of them displaced the two leaders in our testing.

2.1 Raspberry Pi 5 (8 GB) — the default

The Pi 5 is the device this guide assumes you'll start with unless you have a specific reason to choose otherwise.

What works:

  • Hermes Agent without browser tool: smooth, comfortable, ~3 GB RAM in use under load.
  • Nanobot: trivially well; the codebase is so small that resource pressure never appears.
  • OpenClaw 2026.4: works with the browser tool disabled; with it enabled, the Chromium container makes RAM uncomfortably tight.
  • Local LLM: Phi-3 mini 3.8B at Q4 hits ~6 tok/s, just barely useful for small standalone tasks. Don't try larger models on the Pi 5.

What doesn't work:

  • ZeroClaw with anything bigger than Phi-3 mini: out of memory.
  • IronClaw: not officially supported on ARM at this size.
  • Heavy browser automation: technically possible, practically miserable.

Power profile: 5-12 W typical, peaks ~15 W during sustained agent workloads. The official 27W PSU has comfortable headroom. Cooling: the official active cooler is the right pick if you're running anything close to 24/7. Without it, the Pi 5 throttles under sustained load.
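To check whether your own Pi 5 has actually throttled under load, `vcgencmd get_throttled` reports a bitmask. A small decoder makes it readable — a sketch based on the flag bits documented for the Raspberry Pi firmware:

```python
# Sketch: decode the bitmask from `vcgencmd get_throttled` on Raspberry Pi.
# Flag bits follow the documented firmware meanings.

FLAGS = {
    0: "under-voltage detected",
    1: "ARM frequency capped",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
    19: "soft temperature limit has occurred",
}

def decode_throttled(raw: str) -> list[str]:
    """Parse output like 'throttled=0x50000' into human-readable flags."""
    value = int(raw.split("=")[1], 16)
    return [desc for bit, desc in FLAGS.items() if value & (1 << bit)]

if __name__ == "__main__":
    # 0x50000 = bits 16 and 18: under-voltage and throttling have occurred
    print(decode_throttled("throttled=0x50000"))
```

If you see anything other than `throttled=0x0` after a day of agent workloads, fix cooling or the PSU before blaming the software.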

Cost breakdown (April 2026):

  • Pi 5 8 GB board: €85
  • Official 27W USB-C PSU: €13
  • microSD 64 GB Class 10: €10
  • Official active cooler: €5
  • Optional NVMe HAT + 256 GB SSD: +€60 (recommended for serious use)
  • Total: ~€113-173 depending on storage choice

We strongly recommend the NVMe HAT for any setup that will run for more than a week. SD card lifespan under sustained agent workloads is measured in months, not years.

2.2 Orange Pi 5 Plus (16 GB) — the spec-pusher

The Orange Pi 5 Plus is meaningfully more capable on paper than the Pi 5: 16 GB or 32 GB RAM ceiling, RK3588 with 6 TOPS NPU, Mali-G610 GPU. Real-world performance is genuinely better for memory-bound workloads.

The catch is software ecosystem. Official Orange Pi OS images are functional but rough. Armbian community builds are better, but you'll spend time learning a less-trodden distribution. The Pi's “everything just works” story isn't replicated here.

When we recommend the Orange Pi 5 Plus over the Pi 5:

  • You specifically need 16+ GB RAM in an SBC form factor
  • You're comfortable on Armbian
  • The RK3588 NPU integration matters for your workload

When we don't:

  • First-time SBC user
  • Production where community support matters
  • Any case where Pi-specific tutorials would save you time

Cost: 16 GB SKU around €180, 32 GB around €240.

2.3 The other SBCs we tested

Briefly:

  • Khadas Edge2: nice hardware, smaller community, Linux support rougher than Orange Pi.
  • Radxa Rock 5B+: capable, reasonable price, but the ecosystem feels sparse compared to the Pi.
  • Banana Pi BPI-M4 Berry: not credible for sustained agent workloads — Allwinner SoCs have driver issues we couldn't work around in a week.

If you want SBC and you're not sure, get the Pi 5. The Orange Pi 5 Plus is genuinely competitive, but only if you know what you're getting into.

3. Tier 2 — Mini PCs (€300-700)

The middle of the buyer's guide and where most readers should land. Mini PCs at this price point hit the sweet spot of capability, portability, power efficiency and software compatibility.

3.1 Generic Intel mini PCs — the workhorse

We've tested Geekom IT13 (i7-13620H, 32 GB, 1 TB), Beelink SER7 (Ryzen 7 7840HS, 32 GB, 1 TB) and ASUS PN-series equivalents. All three are functionally interchangeable for self-hosted AI workloads.

What works:

  • Hermes Agent with full tool set including browser automation
  • OpenClaw 2026.4 full
  • ZeroClaw with Mistral 7B Q4 at ~18 tok/s, Llama 3 8B Q4 at ~15 tok/s
  • IronClaw with full audit logging and gVisor sandboxing
  • Multi-tool orchestration without RAM pressure

What doesn't:

  • Llama 3.3 70B at Q4: technically loadable on 32 GB, runs at 1-2 tok/s, not useful in practice. If you want a larger LLM, move up to Apple Silicon.

Power: 20-55 W typical under sustained load. Heat output noticeable but manageable; most mini PCs run at low-fan-noise levels except under maximum load.

Cost: €450-550 for 32 GB / 1 TB SKUs in April 2026. Ryzen options generally cheaper than Intel for equivalent performance. Both are fine.

Our pick at this tier: Geekom IT13 at €450-490. Solid build, decent warranty, Linux compatibility verified on Debian 12 and Ubuntu 22.04.

3.2 Intel NUC 13 / NUC 14 (now ASUS-branded) — the reference

After Intel exited the NUC business in 2023, ASUS picked up manufacturing. The current NUC 13/14 line is the reference design that generic mini PCs clone.

Pros: design quality, BIOS stability, broader vendor support, more predictable 5-year reliability.

Cons: 30-40% premium over generic equivalents. Not worth it unless you specifically value reference design (compliance, bulk procurement, etc.).

Cost: €700-1000 for equivalent 32 GB / 1 TB SKUs.

3.3 The fanless option

Several vendors (CWWK, Topton, Akasa) make fanless mini PCs based on Intel N-series or low-TDP i3 CPUs. We've tested CWWK G2 (N100, 16 GB).

Pros: literally silent, very low idle power, surprisingly capable for non-LLM agent workloads.

Cons: 16 GB RAM ceiling on N100, single-channel memory hurts. CPU performance roughly equivalent to a Pi 5 for agent workloads. Local LLMs essentially out of scope.

When this makes sense: noise-sensitive deployment (bedroom homelab), edge MCP server host, dedicated tool runner.

Cost: €250-350.

4. Tier 3 — Apple Silicon (€699-2200)

Apple's unified memory architecture is the cheat code in this price range: RAM doubles as LLM working memory, and memory bandwidth is the chip's strength. A Mac Mini M4 with 24+ GB RAM does things at this price point that x86 mini PCs in the same range cannot.

4.1 Mac Mini M4 (16 GB / 24 GB / 32 GB) — the local-LLM entry

The base M4 Mac Mini is genuinely good value at €699. The 16 GB SKU is fine for cloud-LLM-driven agent workloads. The real value starts at 24 GB (€899) which gives you room to run Llama 3 8B Q4 alongside the agent runtime.

Benchmarks:

  • Mistral 7B Q4: 30 tok/s
  • Llama 3 8B Q4: 38 tok/s
  • Phi-3 medium 14B Q4: 22 tok/s

These numbers are 2-3x what an equivalent-priced x86 mini PC achieves on LLM inference. The Neural Engine accelerates a subset of operations meaningfully on macOS-native runtimes.

4.2 Mac Mini M4 Pro (48 GB / 64 GB) — the 70B target

The M4 Pro tier with 48+ GB unified memory is where local-LLM self-hosting gets serious. Llama 3.3 70B in Q4 quantisation fits in 48 GB with comfortable headroom and runs at 9-10 tok/s — usable for most agentic workloads.

Benchmarks at 48 GB:

  • Llama 3.3 70B Q4: 9-10 tok/s
  • Qwen 2.5 72B Q4: 8-9 tok/s
  • Mistral Small 22B Q4: 18 tok/s

At €1,899, the M4 Pro Mac Mini is the cheapest credible 70B-class local-LLM host in 2026. The next step up (Mac Studio M3 Ultra) more than doubles the price for ~2x the speed.
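A back-of-envelope way to check whether a quantised model fits a given RAM size: Q4_K_M lands around 4.8-4.9 effective bits per weight, plus a few GB for KV cache and runtime overhead. A rough estimator — the bits-per-weight and overhead figures here are our working assumptions, not vendor specs:

```python
# Sketch: rough RAM footprint for a Q4_K_M-quantised model.
# bits_per_weight (~4.85) and overhead_gb are assumptions, not vendor specs.

def q4_footprint_gb(params_billion: float,
                    bits_per_weight: float = 4.85,
                    overhead_gb: float = 3.0) -> float:
    """Estimated resident size in GB: quantised weights plus runtime overhead."""
    weights_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 / 1e9
    return weights_gb + overhead_gb

if __name__ == "__main__":
    for name, size in [("Llama 3 8B", 8), ("Mistral Small 22B", 22),
                       ("Llama 3.3 70B", 70)]:
        print(f"{name}: ~{q4_footprint_gb(size):.0f} GB")
```

At roughly 45 GB estimated for a 70B model at Q4, the 48 GB SKU is the minimum that fits — which is exactly why this tier, and not the 32 GB one, is the 70B target.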

4.3 Mac Mini M4 — the macOS lock-in question

Asahi Linux on Apple Silicon Macs is genuinely impressive but not production-grade for server workloads as of April 2026. If your stack absolutely requires Linux, the Mac Mini story breaks down.

If you can run macOS as your host OS, the Mac Mini is by far the most capable small-form-factor AI host at its price point. If you can't, move to mini PCs.

5. Tier 4 — Framework Laptops (€1100-2200)

Framework's repairable, modular laptops earn their place in this guide because they're the rare modern laptop where Linux is first-class and upgrades are realistic.

5.1 Framework Laptop 13 — the development machine

Ryzen 7 7840U or Intel Core Ultra 7 with up to 64 GB DDR5. Linux compatibility verified on multiple distributions. Repair-friendly, upgrade-friendly.

For self-hosted AI as “work laptop that also hosts an agent in the background”, the FW13 with 32 GB RAM (~€1500) is the cleanest option in 2026.

5.2 Framework Laptop 16 — laptop with discrete GPU

The FW16 is more controversial. The discrete GPU module (Radeon RX 7700S) adds local-LLM headroom but trades portability and battery life.

If you want one machine for both daily-driver work AND demanding local LLMs, the FW16 is the most credible portable option. If you're willing to use a Mac Mini at home + a lighter laptop for portability, that combination is generally better and cheaper.

5.3 The general laptop-as-server caveat

Laptops as always-on servers have inherent compromises:

  • Battery hardware that ages whether you use it or not
  • Thermal/fan tuning optimised for portability, not sustained load
  • Sleep/wake behaviour that's never quite what you want for 24/7 operation

If your AI host is going to be on continuously, get a desktop-class device (mini PC or Mac Mini). If your budget stretches to only one machine that must do both, Framework is the right answer.

6. Tier 5 — Workstations (€3000+)

Above the Mac Mini M4 Pro tier, you're moving into Mac Studio, custom workstation, or rack-mount territory. Diminishing returns set in quickly for most personal/small-team workloads.

6.1 Mac Studio M3 Ultra — the spec king

192 GB unified memory ceiling. Llama 3.3 70B Q4 at 22 tok/s. Larger models (Llama 3 405B Q4) become technically tractable. €4,500+ for 192 GB SKU.

When this makes sense: production deployments where local-LLM performance is the bottleneck and capability matters more than cost. Otherwise, the M4 Pro Mini does 80% of the job for 40% of the price.

6.2 Custom workstation with discrete GPU

A €2,000-3,500 custom build with RTX 4090 or RTX 5080 (when shipping) trades unified-memory advantage for raw inference throughput on smaller models, plus PCIe expandability and the ability to run non-Apple runtimes (vLLM, SGLang, etc.) at full speed.

For developers building production AI infrastructure, a custom workstation may be preferable to Apple Silicon. For “self-hosted personal agent” workloads, the Mac Studio remains simpler and more power-efficient.

6.3 Rack-mount and home-lab tier

Beyond the workstation tier, you're building a home lab, not a “pocket AI” setup. Out of scope for this guide.

7. Use case → recommendation

We get asked the same six questions repeatedly. Here are the answers, condensed.

| Use case | Buy | Total cost |
|---|---|---|
| First-time self-hoster, learning | Pi 5 (8 GB) + accessories | ~€140 |
| Workhorse agent for small team | Generic Intel mini PC (32 GB) | ~€500 |
| Local-LLM 8B as primary | Mac Mini M4 (24 GB) | ~€1,099 |
| Local-LLM 70B as primary | Mac Mini M4 Pro (48 GB) | ~€1,899 |
| One machine for work + AI | Framework Laptop 13 (32 GB) | ~€1,650 |
| Distributed agent + edge tools | Pi 5 + 3-5 Pi Zero 2 W | ~€220 |

8. What we recommend NOT buying (and why)

A few devices we tested or considered and didn't recommend:

  • Older Raspberry Pi (Pi 4 8 GB): still works, but the Pi 5 is faster for nearly the same money, so there's no reason to buy the older board new.
  • NVIDIA Jetson Nano / Orin Nano: niche AI-accelerator hardware whose software stack doesn't map well onto general agent workloads.
  • Older Apple hardware (M1 Mac Mini): refurbished M1 minis are tempting, but their RAM ceilings make them a dead end for local LLMs.
  • Tiny x86 sticks (Intel Compute Stick et al.): too constrained in RAM, thermals and I/O for sustained agent workloads.
  • Generic Chinese mini PCs without thermal review: the bottom end of the market cuts corners on cooling, so check an independent thermal review before buying.

9. Hosting infrastructure beyond the device

The hardware is half the buying decision. The other half is the software/network stack that goes with it. Briefly:

  • Reverse proxy: Caddy is the easiest path to HTTPS-everything with automatic certificates.
  • Container runtime: Docker is the default. Podman is fine. lxd/lxc work too if you prefer system containers.
  • Remote access: Tailscale. Use Tailscale. Don't expose your agent to the public internet.
  • Backup: borgbackup or restic to a remote endpoint. Test the restore path, not just the backup job.
  • Monitoring: Netdata, Prometheus + Grafana, or just a status page — whatever you'll actually look at.
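To illustrate how little the Caddy piece asks of you: a minimal Caddyfile that fronts a local agent dashboard with automatic HTTPS. The hostname and port are placeholders, not values from this guide:

```
# Minimal Caddyfile sketch: auto-HTTPS reverse proxy in front of a
# locally running agent UI. Replace the hostname and port with your own.
agent.example.com {
    reverse_proxy localhost:8080
}
```

Two lines of config, and certificate issuance and renewal are handled for you — this is why we default to Caddy over hand-rolled nginx setups at this scale.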

10. Power, cooling, and physical placement

Often overlooked but matters in practice:

  • Always-on hosting in a closed cabinet without ventilation is bad for both performance and hardware lifespan; give the device airflow.
  • Power consumption at 24/7: Pi 5 at 8W average = ~70 kWh/year = roughly €20/year at typical European electricity prices (~€0.30/kWh).
  • UPS for production: a small APC or CyberPower UPS (€60-100) saves you from filesystem corruption and unclean shutdowns on power cuts.
  • Wall mounting / desk placement: the Pi 5 is small enough to tuck almost anywhere; just keep the cooler's intake clear.
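The power arithmetic above generalises to any always-on host. A sketch worth keeping around — the €0.30/kWh price is only an example input, not a claim about your tariff:

```python
# Sketch: annual energy use and cost for an always-on host.
# eur_per_kwh is an input; €0.30 is an example figure, not a tariff claim.

def annual_kwh(avg_watts: float) -> float:
    """Energy drawn over a year at a constant average power."""
    return avg_watts * 24 * 365 / 1000

def annual_cost_eur(avg_watts: float, eur_per_kwh: float = 0.30) -> float:
    return annual_kwh(avg_watts) * eur_per_kwh

if __name__ == "__main__":
    for name, watts in [("Pi 5", 8), ("mini PC", 30)]:
        print(f"{name}: {annual_kwh(watts):.0f} kWh/yr, "
              f"~€{annual_cost_eur(watts):.0f}/yr")
```

At 30 W average this lands near €80/year, which is the running-cost figure we use for the recommended mini PC setup below.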

11. The pocket-AI setup we recommend in 2026

If you want our specific recommendation for “set up once, run forever”:

1. Hardware: €450 generic Intel mini PC, 32 GB RAM, 1 TB NVMe.
2. OS: Debian 12 minimal install.
3. Container runtime: Docker + Docker Compose.
4. Reverse proxy: Caddy (auto-HTTPS via Let's Encrypt).
5. Remote access: Tailscale.
6. Agent: Hermes Agent latest, full tool set including browser.
7. LLM stack: Claude API as primary (Anthropic via OpenRouter), Ollama with Mistral 7B Q4 as offline fallback.
8. Backup: borgbackup nightly to a Hetzner Storage Box (€3.50/month for 1 TB).
9. Monitoring: Netdata for system stats, Hermes Agent's own dashboard for agent health.
10. Updates: Watchtower for container updates, unattended-upgrades for Debian security patches.
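Steps 3-7 of the list above can be sketched as a single Docker Compose file. This is an illustrative skeleton under stated assumptions: the image names and volume paths are placeholders, and `hermes-agent:latest` in particular is a hypothetical image name, not an official one:

```
# docker-compose.yml sketch for the recommended stack. Image names and
# ports are placeholders; "hermes-agent" is illustrative, not official.
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_models:/root/.ollama
  agent:
    image: hermes-agent:latest   # hypothetical image name
    depends_on: [ollama]
volumes:
  caddy_data:
  ollama_models:
```

The point of the sketch is the shape: one reverse proxy, one LLM runtime, one agent container, with named volumes so Watchtower-driven image updates never touch your state.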

This setup costs ~€500 in hardware plus ~€5/month recurring (Tailscale + Hetzner Storage Box). Power: ~€80/year. Total 5-year cost of ownership: ~€1,200. That compares favourably with €30-50/month managed-hosting alternatives (€1,800-3,000 over five years).

12. Closing notes

The pocket-AI thesis is straightforward: hardware in 2026 is good enough that “run your AI on a device you own” is competitive with cloud-hosted alternatives for most personal and small-team use cases. The catch is that you have to be willing to do the operational work — set up the host, manage updates, monitor the system, fix things when they break.

If that work is interesting to you, you're the audience for this guide and we'd love to have you on the [Thursday newsletter](/newsletter). If it isn't, the managed-hosting services exist for a reason — [ClawRift](https://clawrift.com), [NitroClaw](https://nitroclaw.com) and friends will gladly take €19-100/month to handle it for you.

Either way: pick deliberately. Don't buy more than you need. Don't under-buy and ship a setup that can't grow.

The hardware reviews live at [/pocket](/pocket). The agents at [/agents](/agents). Updates to this guide are quarterly — subscribe to [the newsletter](/newsletter) to get them as they ship.

Related

  • [Pocket AI complete guide](/guides/pocket-ai-complete-guide) — the broader manifesto
  • [Self-hosted AI agents 2026 — landscape report](/guides/self-hosted-ai-landscape-2026)
  • [The complete OpenClaw timeline](/guides/openclaw-complete-history)
  • [How we test agents](/methodology)