# Claude's 1M Context Window Is GA at Standard Pricing

> Claude Opus 4.6 and Sonnet 4.6 now offer 1M token context at standard pricing, with no long-context premium and improved retrieval accuracy.

Published: 2026-03-16
URL: https://daniliants.com/insights/1m-context-is-now-generally-available-for-opus-46-and-sonnet-46-claude/
Tags: claude, anthropic, context-window, llm-agents, claude-code, prompt-engineering, ai-coding

---

## Summary

Anthropic has made the full 1M token context window generally available for Claude Opus 4.6 and Sonnet 4.6 at standard pricing - no long-context premium. The key practical impact: Claude Code Max/Team/Enterprise users on Opus 4.6 now get 1M context automatically, meaning far fewer forced compactions and more intact sessions. Opus 4.6 scores 78.3% on MRCR v2 (multi-round context retrieval), the highest reported among frontier models at that context length.

## Key Insights

- **No pricing penalty**: $5/$25 per million input/output tokens for Opus 4.6, $3/$15 for Sonnet 4.6 - the same rate at 900K tokens as at 9K. Previously, long-context requests incurred a pricing multiplier.
- **Media limits expanded**: 600 images or PDF pages per request (up from lower limits).
- **78.3% MRCR v2 score**: Measured on the retrieval-in-context benchmark at the full 1M length. A useful calibration point: the model finds relevant details reliably, but not perfectly.
- **Fresh context > long context**: Experienced Hacker News users strongly recommend starting new sessions rather than riding compactions deep into a long context. By default, CLAUDE.md instructions dilute across compactions unless a hook re-inserts them.
- **Subagents are the real unlock**: Each subagent starts with a clean context; the orchestrator only sees results, keeping its own context low. This lets you churn through millions of output tokens across a session without degradation - documented at `code.claude.com/docs/en/agent-teams`.
- **Practical cap in use**: Experienced users rarely need the raw 1M. Workflows combining code maps (Flash for indexing) with auto-context (targeted file selection) keep individual requests to 30K–80K tokens even on large repos. 1M is a ceiling, not a target.
- **Context poisoning is real**: Bad conversation history degrades performance. Rolling back (press Escape twice, or `/rewind`) rather than steering is the correct recovery pattern.
- **AI-generated slop risk**: Non-technical users building apps with Opus alone produce working but insecure code. Production readiness is not the same as task completion.
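The flat pricing in the first bullet means cost scales linearly with token count, with no long-context surcharge. A minimal sketch of that arithmetic, using the per-million-token rates from the post (the token counts in the examples are hypothetical):

```python
# Per-million-token rates from the post (no long-context multiplier).
RATES = {
    "opus-4.6":   {"input": 5.00, "output": 25.00},
    "sonnet-4.6": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD; the same per-token rate applies at 9K and at 900K input."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A near-full-context Opus request vs. a small one - same rate, linear cost:
print(round(request_cost("opus-4.6", 900_000, 4_000), 3))  # 4.6
print(round(request_cost("opus-4.6", 9_000, 4_000), 3))    # 0.145
```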
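The subagent bullet describes a structural pattern: each subtask runs in a fresh context, and the orchestrator accumulates only compact results. A hypothetical pure-Python sketch of that shape (not the real Claude Code agent-teams API; `Agent` and `orchestrate` are illustrative names, and the model call is stubbed out):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Each agent owns its own message history - a clean context per subtask.
    messages: list = field(default_factory=list)

    def run(self, task: str) -> str:
        self.messages.append({"role": "user", "content": task})
        # Placeholder for a real model call (e.g. a Messages API request).
        result = f"summary of: {task}"
        self.messages.append({"role": "assistant", "content": result})
        return result

def orchestrate(tasks: list[str]) -> list[str]:
    results = []
    for task in tasks:
        sub = Agent()                   # fresh context for every subtask
        results.append(sub.run(task))   # orchestrator keeps only the result
    return results                      # its own context stays small
```

The design point is that the subagents' intermediate tokens never enter the orchestrator's context, which is why total output across a session can run into millions of tokens without degrading the coordinating conversation.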
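The "practical cap" bullet implies a selection step: rank candidate files by relevance, then pack them into the request until a token budget is reached. A hedged sketch of that idea (function names and the ~4-chars-per-token heuristic are assumptions, not the actual auto-context implementation):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for typical source code.
    return max(1, len(text) // 4)

def select_context(files: dict[str, str],
                   ranked: list[str],
                   budget: int = 80_000) -> list[str]:
    """Greedily pack the most relevant files under a token budget."""
    chosen, used = [], 0
    for path in ranked:                  # ranked: most relevant first
        cost = estimate_tokens(files[path])
        if used + cost > budget:
            continue                     # skip files that would bust the budget
        chosen.append(path)
        used += cost
    return chosen
```

With a budget in the 30K–80K range, even a large repo yields a small, targeted request, treating 1M as headroom rather than a target.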