cook - portable terminal AI agent

Source

Summary

cook is a single-binary terminal AI agent that works with any LLM provider (OpenAI, Anthropic, Google, Groq, Vercel AI Gateway), selected via environment variables. It integrates with Unix pipes and shell scripts, requires no language runtime (Node or Python), and supports reusable prompt templates stored as .md files, compatible with the Cursor and Claude command formats.
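The env-var flow above can be sketched as follows. The variable name is an assumption following Anthropic's own convention; only the plain cook "prompt" invocation style is taken from the tool's description, and the cook calls are guarded so the sketch runs even where the binary is absent:

```shell
# Provider is inferred from whichever API key is set. The variable
# name below follows Anthropic's convention and is an assumption,
# not confirmed cook documentation.
export ANTHROPIC_API_KEY="sk-ant-placeholder"

if command -v cook >/dev/null; then
  # One-shot query:
  cook "summarize open TODOs in this repo"
  # Unix-pipe integration:
  git diff | cook "write a commit message for this change"
fi

echo "key configured: ${ANTHROPIC_API_KEY:+yes}"
```

Because the key is the only configuration, the same two lines work identically in an interactive shell, a cron entry, or a CI step.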

Key Insight

The interesting angle here is the “prompt as file” pattern: saving prompts as .md files and running them with cook /deploy or cook /create-pr. This turns repeatable AI workflows into version-controlled, shareable command aliases. Combined with pipe support (cat server.log | cook "find root cause"), it bridges the gap between ad-hoc AI queries and scripted automation - useful for cron jobs, CI/CD pipelines, or any context where you need LLM reasoning without an interactive session.
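A minimal sketch of the "prompt as file" pattern described above. The ~/.cook/commands path is an assumption for illustration; the real prompt directory is whatever the tool documents. Only the /create-pr slash-command invocation comes from the source:

```shell
# Save a reusable prompt as an ordinary .md file
# (directory name is hypothetical):
mkdir -p "$HOME/.cook/commands"
cat > "$HOME/.cook/commands/create-pr.md" <<'EOF'
Summarize the staged changes, then draft a pull request title
and body in imperative mood.
EOF

# Run it as a slash command (guarded in case cook is not installed):
if command -v cook >/dev/null; then
  git diff --staged | cook /create-pr
fi
```

Because the prompt is just a file, it can be committed next to the code it operates on and shared with the team like any other script.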

The multi-agent config (--agent) lets you define multiple model profiles (e.g., a “fast” agent backed by a cheap model for quick lookups, a “deep” agent for complex reasoning) and switch between them per invocation. The --dry-run flag previews all file writes and destructive commands before execution - a sensible safety mechanism for automated workflows.
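Profile switching and the dry-run preview might look like this. Only the --agent and --dry-run flags and the “fast”/“deep” profile names come from the text; how profiles are actually defined is tool-specific and not shown. The run helper prints each command and executes it only when cook is on PATH:

```shell
# Print each command, then run it only if cook is installed —
# lets the sketch execute even without the binary.
run() {
  echo "+ $*"
  if command -v cook >/dev/null; then "$@"; fi
}

run cook --agent fast "what does errno EXDEV mean?"
run cook --agent deep "review this module for race conditions"
run cook --dry-run /deploy
```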

No config file is required - just set an API key env var and it auto-selects the provider/model. This makes it trivially deployable on servers or in containers.
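A server/container bootstrap under those zero-config assumptions might be nothing more than the following. The GROQ_API_KEY name follows Groq's convention and the log path is illustrative; neither is confirmed by the source:

```shell
#!/bin/sh
# Zero-config deployment sketch: drop in the static binary,
# export one key, and the provider/model is auto-selected.
export GROQ_API_KEY="${GROQ_API_KEY:-gsk-placeholder}"

if command -v cook >/dev/null; then
  # Non-interactive use, e.g. from cron or a CI step:
  cat /var/log/app.log | cook "find root cause"
fi
echo "ready: ${GROQ_API_KEY:+yes}"
```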