ps.
Everything we've published on ps across guides, agents, hardware reviews and glossary entries — 16 entries in total.
Guides (1)
- API Keys Out of ps Output · 2026-02-18
Move API keys from command-line arguments to a chmod 600 env file, hiding them from ps output.
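The guide's core pattern is easy to sketch. A minimal Python version, assuming a hypothetical ~/.myapp.env file and an API_KEY variable name (neither is specified by the guide): the key reaches the process through a permission-checked file rather than argv, so it never appears in ps output.

```python
import os
import stat
import sys

ENV_FILE = os.path.expanduser("~/.myapp.env")  # hypothetical path

def load_env_file(path: str) -> None:
    # Refuse group/other-readable files, mirroring the chmod 600 advice.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        sys.exit(f"{path}: permissions {oct(mode)} are too open; expected 600")
    # Parse simple KEY=VALUE lines, skipping blanks and comments.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env_file(ENV_FILE)
api_key = os.environ["API_KEY"]  # hypothetical variable name
# Pass api_key via request headers or SDK config, never via subprocess argv.
```

The permission check enforces the same chmod 600 rule the guide relies on; anything looser and the env file is no safer than the command line it replaced.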
Agents (5)
- OpenClaw
The original viral self-hosted AI agent. Post-crisis 2026.4 line is genuinely safer; pre-2026.3 is genuinely dangerous.
- NanoClaw
macOS-only opinionated fork. Apple containers + Claude. Sub-second boot.
- Moltworker
Self-hosted AI agent on Cloudflare Workers. Free at low volume. Workers runtime constraints apply.
- DeployHermes
The first managed Hermes Agent service. $19/month for a hosted Hermes deployment with dashboard, SSO and CVE monitoring.
- ZeroClaw Lite
Stripped-down ZeroClaw fork for resource-constrained hosts. Phi-3 mini by default; runs comfortably on a Pi 5.
Hardware (6)
- Raspberry Pi 5
The default starting point for pocket AI in 2026. 4–8 GB of LPDDR4X, ARM Cortex-A76, sub-€100, runs Hermes Agent (no browser tool) or Nanobot comfortably.
- Mac Mini M4 / M4 Pro
The single best small-form-factor host for local LLMs in 2026. Apple Silicon unified memory makes 70B-class models tractable on a desk-sized machine.
- Framework Laptop 13 / 16
Repairable, modular laptop with strong Linux support. The right choice if you want a portable agent host that doubles as a daily driver.
- Orange Pi 5 Plus
Rockchip RK3588-based SBC with 4–32 GB RAM and an NPU. The Pi 5's most credible competitor for AI workloads on ARM.
- Minisforum UM790 Pro
Ryzen 9 7940HS mini PC. 32–64 GB RAM. The best Linux mini PC value at €700–900.
- Dell OptiPlex 3070 / 5070 Micro (used)
€100–180 used Intel mini PC. 8th gen i5, 8–16 GB RAM, NVMe. The cheapest credible mini PC for self-hosted AI.
Glossary (4)
- VPS — Virtual private server — a virtualised slice of a physical server, typically rented by the month.
- Hetzner — German VPS and dedicated server provider. The cheapest credible EU option for self-hosting in 2026.
- Caddy — Reverse proxy server with automatic HTTPS via Let's Encrypt. The easiest path to TLS on a self-hosted setup (minimal Caddyfile sketch after this list).
- llama.cpp — C++ inference engine for LLMs. The runtime under most local-LLM setups, including Ollama.
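For the Caddy entry above, the automatic-HTTPS claim is easiest to see in a Caddyfile. A minimal sketch, assuming a backend listening on localhost:8080 (the domain is a placeholder); given a real domain pointing at the host, Caddy obtains and renews the Let's Encrypt certificate on its own:

```
# Caddyfile: two lines of config, TLS handled automatically
example.com {
    reverse_proxy localhost:8080
}
```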