PocketClaw · vol. 1 · 2026

llama.

Everything we've published on llama across guides, agents, hardware reviews and glossary entries — 14 entries in total.

Guides (1)

Agents (1)

  • ZeroClaw

    Privacy-first. Local LLMs only. Network egress denied at the iptables level. AGPL-3.0.
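    ZeroClaw's egress denial can be sketched with a pair of iptables rules. A hedged illustration only: the agent user name (`zeroclaw`) and the loopback carve-out for a local LLM runtime are assumptions, not ZeroClaw's actual rule set.

    ```shell
    # Allow loopback so the agent can still reach a local LLM runtime
    # (assumption: e.g. Ollama listening on 127.0.0.1).
    iptables -A OUTPUT -o lo -j ACCEPT

    # Drop every other outbound packet from the agent's user
    # (assumption: ZeroClaw runs as a dedicated "zeroclaw" user).
    iptables -A OUTPUT -m owner --uid-owner zeroclaw -j DROP
    ```

    The per-user `owner` match keeps the deny scoped to the agent rather than the whole host.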

Hardware (7)

  • Raspberry Pi 5

    The default starting point for pocket AI in 2026. 4–8 GB of LPDDR4X, ARM Cortex-A76, sub-€100, runs Hermes Agent (no browser tool) or Nanobot comfortably.

  • Intel NUC 13 / Mini PC

    Mini PCs at €300–600 with i5/i7 + 16–32 GB RAM. The sweet spot for self-hosted AI agents that need browser automation and decent local model performance.

  • Mac Mini M4 / M4 Pro

    The single best small-form-factor host for local LLMs in 2026. Apple Silicon unified memory makes 70B-class models tractable on a desk-sized machine.

  • Geekom IT13 / generic Intel mini PC

    Sub-€500 mini PC with i7-13620H, 32 GB RAM, 1 TB SSD. The pragmatic alternative to the Intel NUC.

  • Mac Studio M3 Ultra

    192 GB unified memory ceiling. The local-LLM workstation. Llama 3.3 70B at 22 tok/s. €4,500+.

  • MacBook Air M3 / M4

    Fanless laptop with up to 24 GB unified memory. Runs Mistral 7B Q4 silently on the train. €1,299+.

  • Minisforum UM790 Pro

    Ryzen 9 7940HS mini PC. 32–64 GB RAM. The best Linux mini PC value at €700–900.

Glossary (5)

  • Local LLM: Language model running on local hardware rather than via cloud API. Llama, Qwen, Mistral local variants.
  • Ollama: Local LLM runtime that exposes an OpenAI-compatible API over local model weights.
  • Llama: Meta's family of open-weight large language models. Llama 3.3 70B is the leading open frontier model in 2026.
  • llama.cpp: C++ inference engine for LLMs. The runtime under most local-LLM setups, including Ollama.
  • GGUF: File format for storing quantised LLM weights, used by llama.cpp and Ollama.
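Because Ollama speaks the OpenAI chat-completions format, any OpenAI-style client can drive local weights. A minimal sketch, assuming Ollama is serving on its default port 11434 and a `llama3.3` model has already been pulled; it only builds the request, since sending it needs a running instance.

```python
import json
import urllib.request

def build_chat_request(prompt, model="llama3.3"):
    """Build an OpenAI-style chat request for Ollama's local endpoint."""
    body = {
        "model": model,  # assumption: a model pulled locally, e.g. llama3.3
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",  # Ollama default port
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Why is GGUF quantisation useful?")
# With Ollama running: resp = urllib.request.urlopen(req)
```

Swapping the base URL is the only change needed to point an existing OpenAI client at local hardware instead of a cloud API.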