Darkbloom: Private AI Inference on Apple Silicon
Originally from darkbloom.dev
My notes
Summary
Darkbloom is a decentralized inference network that routes encrypted AI requests to idle Apple Silicon Macs, positioning itself as the “Airbnb of GPU compute”. It advertises ~50% lower prices than OpenRouter, 100% of revenue to node operators (minus a platform fee), and an OpenAI-compatible API backed by hardware attestation, so operators literally cannot see user prompts.
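The “OpenAI-compatible API” claim means an existing client should only need a new base URL. A minimal sketch of the unchanged request shape; the base URL path and model id here are my assumptions, not documented in the post:

```python
import json

# Hypothetical values -- the post does not publish the endpoint or model ids.
BASE_URL = "https://api.darkbloom.dev/v1"  # assumed OpenAI-style base URL
MODEL = "qwen3.5-27b"                      # assumed model identifier

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI chat-completions payload.

    "OpenAI-compatible" implies this exact shape is POSTed to
    BASE_URL + "/chat/completions" with no client-side changes.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request(MODEL, "Explain hardware attestation in one line.")
print(json.dumps(payload, indent=2))
```

If responses really are signed per machine (see the trust model below), a careful client could verify the attestation chain on the response before trusting the body.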
Key Insights
- Pricing undercut is concrete, not vibes. Per-million-token table: Gemma 4 26B at $0.03 in / $0.20 out vs OpenRouter $0.40 (50% off). Qwen3.5 27B dense $0.10 / $0.78. Qwen3.5 122B MoE $0.13 / $1.04. MiniMax M2.5 239B at $0.06 / $0.50. Consistent ~50% discount across the board.
- Trust model is the actual product. Four stacked layers make “don’t trust the operator” enforceable: (1) client-side encryption, (2) keys generated inside Apple’s Secure Enclave with attestation chaining to Apple’s root CA, (3) hardened runtime blocking debugger attach and memory inspection, (4) every response signed by the specific machine with a public attestation chain. This is the difference between “decentralized inference” as a pitch and something an enterprise would actually send prompts to.
- Operator economics are real but bounded. Claim: electricity $0.01-0.03/hour on Apple Silicon, 100% of inference revenue to the operator. Real earnings depend on (a) network demand and (b) which models are hosted: a Mac Studio capable of a 239B MoE will earn very differently from an M1 MacBook Air. The “100% of revenue” marketing carves out a separate “platform fee” row, so the true operator split is below 100%.
- Hardware claim worth noting. Unified memory at 273-819 GB/s bandwidth, the Neural Engine, and “machines capable of running 235B-parameter models”: this positions Apple Silicon specifically (not generic consumer GPUs) as the competitive edge vs NVIDIA-centric hyperscalers.
- Distribution hack. One-line install: `curl -fsSL https://api.darkbloom.dev/install.sh | bash` drops a launchd service. Same friction floor as Ollama/Homebrew; the earning side is designed to onboard in under a minute.
- Strategic pattern. Airbnb to idle rooms, Uber to idle cars, rooftop solar to idle roofs, Darkbloom to idle Apple Silicon. Each undercut the incumbent because the marginal cost of the distributed asset is near zero. Darkbloom follows the same economic logic; the open question is whether AI inference demand will be durable on shared consumer hardware or get pulled back behind private datacenter walls for latency/SLA reasons.
- Built by Eigen Labs. Same team behind EigenLayer (restaking). Paper hosted under the Layr-Labs GitHub (papers/dginf-private-inference.pdf). This isn’t a side project; it’s the crypto-infra team applying attestation primitives to AI.
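The per-million-token rates in the pricing bullet can be turned into a quick job-cost comparison; a minimal sketch using only the numbers quoted above (token counts in the example are mine):

```python
# Darkbloom rates quoted in the post: USD per million (input, output) tokens.
DARKBLOOM = {
    "gemma-4-26b":       (0.03, 0.20),
    "qwen3.5-27b":       (0.10, 0.78),
    "qwen3.5-122b-moe":  (0.13, 1.04),
    "minimax-m2.5-239b": (0.06, 0.50),
}

def job_cost(rates: tuple, tokens_in: int, tokens_out: int) -> float:
    """Cost of one job: tokens billed per million at (input, output) rates."""
    r_in, r_out = rates
    return (tokens_in / 1e6) * r_in + (tokens_out / 1e6) * r_out

# Example: 2M input / 1M output tokens on Gemma 4 26B.
cost = job_cost(DARKBLOOM["gemma-4-26b"], 2_000_000, 1_000_000)
print(f"${cost:.2f}")  # 2 * $0.03 + 1 * $0.20 = $0.26
```

At OpenRouter’s quoted $0.40/M output for the same model, the output half of that job alone would cost $0.40, which is the ~50% gap the post is selling.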
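The operator-economics bullet reduces to a simple break-even: hourly revenue, minus the carved-out platform fee, minus the $0.01-0.03/hour electricity figure. A sketch under stated assumptions: the throughput and fee percentage below are made up for illustration, not from the post.

```python
def operator_hourly_profit(tokens_out_per_hour: float,
                           out_rate_per_m: float,
                           platform_fee_pct: float,
                           electricity_per_hour: float) -> float:
    """Net hourly earnings for a node operator.

    out_rate_per_m: USD per million output tokens (from the pricing table).
    platform_fee_pct: the separate "platform fee" row; the post implies > 0.
    electricity_per_hour: the post's $0.01-0.03/hour Apple Silicon figure.
    """
    gross = (tokens_out_per_hour / 1e6) * out_rate_per_m
    return gross * (1 - platform_fee_pct) - electricity_per_hour

# Hypothetical: a Mac Studio serving MiniMax M2.5 239B at $0.50/M output,
# sustaining 1M output tokens/hour, with an assumed 10% platform fee
# and $0.03/hour electricity.
print(operator_hourly_profit(1_000_000, 0.50, 0.10, 0.03))
```

The point of the sketch is the bound in the bullet: profit scales with demand-driven throughput and with which model tier the hardware can host, while the fee quietly lowers the “100% of revenue” headline.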