As conversations grow, you'll eventually approach context window limits. This guide explains how context windows work and introduces strategies for managing them effectively.
For long-running conversations and agentic workflows, server-side compaction is the primary strategy for context management. For more specialized needs, context editing offers additional strategies like tool result clearing and thinking block clearing.
The "context window" refers to all the text a language model can reference when generating a response, including the response itself. This is different from the large corpus of data the language model was trained on, and instead represents a "working memory" for the model. A larger context window allows the model to handle more complex and lengthy prompts. A smaller context window may limit the model's ability to maintain coherence over extended conversations.
The diagram below illustrates the standard context window behavior for API requests¹:
¹ For chat interfaces, such as claude.ai, context windows can also be set up on a rolling "first in, first out" system.
When using extended thinking, all input and output tokens, including the tokens used for thinking, count toward the context window limit, with a few nuances in multi-turn situations.
The thinking budget tokens are a subset of your max_tokens parameter, are billed as output tokens, and count toward rate limits. With adaptive thinking, Claude dynamically decides its thinking allocation, so actual thinking token usage may vary per request.
However, previous thinking blocks are automatically stripped from the context window calculation by the Claude API and are not part of the conversation history that the model "sees" for subsequent turns, preserving token capacity for actual conversation content.
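As a sketch, an extended thinking request with the Python SDK might look like this (the model name and token values are illustrative):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# budget_tokens is a subset of max_tokens: thinking plus the visible
# response together must fit within max_tokens.
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 10000},
    messages=[{"role": "user", "content": "Summarize the tradeoffs of..."}],
)

# usage.output_tokens includes this turn's thinking tokens
print(response.usage.input_tokens, response.usage.output_tokens)
```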
The diagram below demonstrates the specialized token management when extended thinking is enabled:
When extended thinking is enabled, the effective context window is calculated as:

```
context_window = (input_tokens - previous_thinking_tokens) + current_turn_tokens
```

The stripped thinking tokens include both thinking blocks and redacted_thinking blocks. This architecture is token efficient and allows for extensive reasoning without token waste, as thinking blocks can be substantial in length.
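To make the arithmetic concrete, here is a tiny sketch with made-up numbers:

```python
# Hypothetical numbers to illustrate the accounting above
input_tokens = 50_000             # full conversation history sent this turn
previous_thinking_tokens = 8_000  # stripped automatically by the API
current_turn_tokens = 5_000       # this turn's thinking + visible output

context_window_used = (input_tokens - previous_thinking_tokens) + current_turn_tokens
print(context_window_used)  # 47000
```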
You can read more about the context window and extended thinking in the extended thinking guide.
The diagram below illustrates the context window token management when combining extended thinking with tool use:
First turn architecture: the first request carries the user message as input, and Claude's output includes an extended thinking block followed by its text response and tool use request.

Tool result handling (turn 2): the tool_result is passed back to Claude as part of the next user message. The extended thinking block must be returned with the corresponding tool results. This is the only case wherein you have to return thinking blocks.

Third step: once the conversation moves to a new user turn outside of the tool use cycle, Claude generates a new extended thinking block and continues from there. Each assistant turn counts as part of the context window:

```
context_window = input_tokens + current_turn_tokens
```

Claude 4 models support interleaved thinking, which enables Claude to think between tool calls and perform more sophisticated reasoning after receiving tool results.
Claude Sonnet 3.7 does not support interleaved thinking, so there is no interleaving of extended thinking and tool calls without a non-tool_result user turn in between.
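A minimal sketch of this cycle with the Python SDK, assuming a hypothetical get_weather tool (the model name, token values, and stubbed tool output are all illustrative):

```python
import anthropic

client = anthropic.Anthropic()

# Hypothetical tool, for illustration only
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Turn 1: Claude emits a thinking block, then a tool_use request.
first = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 10000},
    tools=[weather_tool],
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)
# Assumes Claude chose to call the tool on this turn.
tool_use = next(b for b in first.content if b.type == "tool_use")

# Turn 2: pass the assistant content back unmodified -- thinking block
# included -- alongside the tool_result.
second = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 10000},
    tools=[weather_tool],
    messages=[
        {"role": "user", "content": "What's the weather in Paris?"},
        {"role": "assistant", "content": first.content},
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": "14°C, light rain",  # stubbed tool output
        }]},
    ],
)
```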
For more information about using tools with extended thinking, see the extended thinking guide.
Claude Opus 4.6, Sonnet 4.5, and Sonnet 4 support a 1-million token context window. This extended context window allows you to process much larger documents, maintain longer conversations, and work with more extensive codebases.
The 1M token context window is currently in beta for organizations in usage tier 4 and organizations with custom rate limits, and is only available for Claude Opus 4.6, Sonnet 4.5, and Sonnet 4.
To use the 1M token context window, include the context-1m-2025-08-07 beta header in your API requests:
```bash
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: context-1m-2025-08-07" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-opus-4-6",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Process this large document..."}
    ]
  }'
```
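If you use the Python SDK, here is a sketch of the equivalent request; the betas parameter sends the same anthropic-beta header shown above:

```python
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-opus-4-6",
    max_tokens=1024,
    betas=["context-1m-2025-08-07"],  # enables the 1M context window beta
    messages=[{"role": "user", "content": "Process this large document..."}],
)
```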
Claude Sonnet 4.5 and Claude Haiku 4.5 feature context awareness: these models track their remaining context window (i.e., their "token budget") throughout a conversation, which lets Claude execute tasks and manage context more effectively because it knows how much space it has left to work in. Claude is trained to use this budget precisely, persisting in a task until the very end rather than guessing how many tokens remain.

For a model, lacking context awareness is like competing in a cooking show without a clock. Claude 4.5 models change this by explicitly informing the model about its remaining context, so it can take maximum advantage of the available tokens.
How it works:
At the start of a conversation, Claude receives information about its total context window:
```
<budget:token_budget>200000</budget:token_budget>
```

The budget is set to 200K tokens (standard), 500K tokens (claude.ai Enterprise), or 1M tokens (beta, for eligible organizations).
After each tool call, Claude receives an update on remaining capacity:
```
<system_warning>Token usage: 35000/200000; 165000 remaining</system_warning>
```

This awareness helps Claude determine how much capacity remains for work and enables more effective execution on long-running tasks. Image tokens are included in these budgets.
Benefits:

Context awareness is particularly valuable for long-running tasks and agentic workflows, where Claude can pace its remaining work against the tokens it has left.
For prompting guidance on leveraging context awareness, see the prompting best practices guide.
If your conversations regularly approach context window limits, server-side compaction is the recommended approach. Compaction automatically summarizes and condenses earlier parts of a conversation on the server, enabling long-running conversations beyond context limits with minimal integration work. It is currently available in beta for Claude Opus 4.6.
For more specialized needs, context editing offers additional strategies, such as tool result clearing and thinking block clearing.
Newer Claude models (starting with Claude Sonnet 3.7) return a validation error when prompt and output tokens exceed the context window, rather than silently truncating. This change provides more predictable behavior but requires more careful token management.
Use the token counting API to estimate token usage before sending messages to Claude. This helps you plan and stay within context window limits.
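For example, a minimal pre-flight check with the Python SDK (the context limit constant and output headroom are assumptions you should set for your model):

```python
import anthropic

client = anthropic.Anthropic()

CONTEXT_LIMIT = 200_000  # assumed limit; check the model comparison table

count = client.messages.count_tokens(
    model="claude-sonnet-4-5",  # illustrative model name
    messages=[{"role": "user", "content": "Process this large document..."}],
)

# Leave headroom for the response, since output tokens also count
# toward the context window.
if count.input_tokens + 1024 > CONTEXT_LIMIT:
    print("Prompt too large; trim history or use compaction.")
```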
See the model comparison table for a list of context window sizes by model.
Compaction: the recommended strategy for managing context in long-running conversations.

Context editing: fine-grained strategies like tool result clearing and thinking block clearing.

Model comparison: see the model comparison table for a list of context window sizes and input / output token pricing by model.

Extended thinking: learn more about how extended thinking works and how to implement it alongside other features such as tool use and prompt caching.