LIVE TAPE
OpenClaw 88,412 stars·CVE-2026-25898 disclosed (HIGH, Hermes)·Hermes Agent v2026.4.7 published·Hermes Agent +182 stars (last hour)·OpenClaw v2026.4.6 — credential vault hardening·CVE-2026-26133 patched (NanoClaw)·Pi 5 16GB rumoured for Q3 — recheck guidance·Nanobot +47 stars (last hour)·ZeroClaw v0.4.2 — Apple container fixes·Mac Mini M4 wins quarterly hardware survey
PocketClaw · vol. 1 · 2026

ollama.

Everything we've published on Ollama across guides, agents, hardware reviews, and glossary entries — 7 entries in total.

Guides (2)

Agents (1)

  • ZeroClaw

    Privacy-first. Local LLMs only. Network egress denied via iptables. AGPL-3.0.
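A sandbox like ZeroClaw's, whose iptables policy drops all outbound traffic, can be spot-checked from the inside by attempting a connection and expecting it to fail. A minimal sketch; the function name and the default host, port, and timeout below are illustrative, not part of ZeroClaw itself:

```python
import socket

def egress_blocked(host: str = "1.1.1.1", port: int = 443, timeout: float = 2.0) -> bool:
    """Return True if an outbound TCP connection fails, which is the
    expected result when an iptables policy drops all egress traffic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: egress is NOT blocked
    except OSError:
        return True  # refused, unreachable, or timed out: egress looks blocked
```

On a correctly locked-down host this should return True for any external destination.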

Hardware (1)

  • Intel NUC 13 / Mini PC

    Mini PCs at €300–600 with i5/i7 + 16–32 GB RAM. The sweet spot for self-hosted AI agents that need browser automation and decent local model performance.

Glossary (3)

  • Ollama

    Local LLM runtime that exposes an OpenAI-compatible API over local model weights.

  • llama.cpp

    C++ inference engine for LLMs. The runtime under most local-LLM setups, including Ollama.

  • GGUF

    File format for storing quantised LLM weights, used by llama.cpp and Ollama.
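The GGUF entry is easy to make concrete: a GGUF file begins with a fixed little-endian header (magic bytes, format version, tensor count, metadata key/value count) before any weights appear. A minimal sketch that writes and re-reads a synthetic header; the file name and version number are illustrative, and the field layout follows the GGUF format as used by llama.cpp:

```python
import struct

GGUF_MAGIC = b"GGUF"  # little-endian magic at offset 0

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version (uint32),
    tensor count and metadata key/value count (both uint64)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError("not a GGUF file")
        (version,) = struct.unpack("<I", f.read(4))
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
    return version, tensor_count, kv_count

# Demo with a synthetic header -- no real model weights needed:
with open("demo.gguf", "wb") as f:
    f.write(GGUF_MAGIC + struct.pack("<IQQ", 3, 0, 0))
print(read_gguf_header("demo.gguf"))  # → (3, 0, 0)
```

The same header check is how tools like llama.cpp reject non-GGUF inputs early, before attempting to parse metadata or tensor data.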