Fast mode provides significantly faster output token generation for Claude Opus 4.6. By setting speed: "fast" in your API request, you get up to 2.5x higher output tokens per second from the same model at premium pricing.
Fast mode is currently in research preview. Join the waitlist to request access. Availability is limited while we gather feedback.
Fast mode is supported on the following models:
- `claude-opus-4-6`

Fast mode runs the same model with a faster inference configuration. There is no change to intelligence or capabilities.
```bash
curl https://api.anthropic.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "anthropic-beta: fast-mode-2026-02-01" \
  --header "content-type: application/json" \
  --data '{
    "model": "claude-opus-4-6",
    "max_tokens": 4096,
    "speed": "fast",
    "messages": [{
      "role": "user",
      "content": "Refactor this module to use dependency injection"
    }]
  }'
```

Fast mode is priced at 6x standard Opus rates for prompts ≤ 200K input tokens, and 12x standard Opus rates for prompts > 200K input tokens. The following table shows pricing for Claude Opus 4.6 with fast mode:
| Context window | Input | Output |
|---|---|---|
| ≤ 200K input tokens | $30 / MTok | $150 / MTok |
| > 200K input tokens | $60 / MTok | $225 / MTok |
Fast mode pricing stacks with other pricing modifiers.
For complete pricing details, see the pricing page.
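To make the rate table above concrete, the helper below (a hypothetical sketch, not part of any SDK) estimates the cost of a single fast mode request. It assumes the pricing tier is selected by the request's input token count, as in the table:

```python
def fast_mode_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one fast mode request on Claude Opus 4.6,
    using the per-million-token (MTok) rates from the table above."""
    if input_tokens <= 200_000:
        input_rate, output_rate = 30.0, 150.0   # $/MTok, prompts <= 200K
    else:
        input_rate, output_rate = 60.0, 225.0   # $/MTok, prompts > 200K
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
```

For example, a request with 100K input tokens and 10K output tokens would cost roughly $3.00 for input plus $1.50 for output under this model.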
Fast mode has a dedicated rate limit that is separate from standard Opus rate limits. Unlike standard speed, which has separate limits for ≤200K and >200K input tokens, fast mode uses a single rate limit that covers the full context range. When your fast mode rate limit is exceeded, the API returns a 429 error with a retry-after header indicating when capacity will be available.
The response includes headers that indicate your fast mode rate limit status:
| Header | Description |
|---|---|
| `anthropic-fast-input-tokens-limit` | Maximum fast mode input tokens per minute |
| `anthropic-fast-input-tokens-remaining` | Remaining fast mode input tokens |
| `anthropic-fast-input-tokens-reset` | Time when the fast mode input token limit resets |
| `anthropic-fast-output-tokens-limit` | Maximum fast mode output tokens per minute |
| `anthropic-fast-output-tokens-remaining` | Remaining fast mode output tokens |
| `anthropic-fast-output-tokens-reset` | Time when the fast mode output token limit resets |
For tier-specific rate limits, see the rate limits page.
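The headers above can be read programmatically to track your remaining fast mode budget. The helper below is a sketch that works on any headers mapping; with the Python SDK, response headers are available through `with_raw_response` (shown in the comment as an assumed usage pattern):

```python
def fast_mode_budget(headers) -> dict:
    """Extract the remaining fast mode token budget from response headers.

    Assumes limit/remaining values are plain integers and reset values
    are timestamp strings, per the headers table above.
    """
    return {
        "input_remaining": int(headers["anthropic-fast-input-tokens-remaining"]),
        "output_remaining": int(headers["anthropic-fast-output-tokens-remaining"]),
        "input_reset": headers["anthropic-fast-input-tokens-reset"],
        "output_reset": headers["anthropic-fast-output-tokens-reset"],
    }

# Assumed usage with the Python SDK's raw-response accessor:
# response = client.beta.messages.with_raw_response.create(...)
# budget = fast_mode_budget(response.headers)
```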
The response usage object includes a speed field that indicates which speed was used, either "fast" or "standard":
```bash
curl https://api.anthropic.com/v1/messages \
  --header "x-api-key: $ANTHROPIC_API_KEY" \
  --header "anthropic-version: 2023-06-01" \
  --header "anthropic-beta: fast-mode-2026-02-01" \
  --header "content-type: application/json" \
  --data '{
    "model": "claude-opus-4-6",
    "max_tokens": 1024,
    "speed": "fast",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
```json
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  ...
  "usage": {
    "input_tokens": 523,
    "output_tokens": 1842,
    "speed": "fast"
  }
}
```

To track fast mode usage and costs across your organization, see the Usage and Cost API.
When fast mode rate limits are exceeded, the API returns a 429 error with a retry-after header. The Anthropic SDKs automatically retry these requests up to 2 times by default (configurable via max_retries), waiting for the server-specified delay before each retry. Since fast mode uses continuous token replenishment, the retry-after delay is typically short and requests succeed once capacity is available.
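Since the advertised delay is usually short, one simple policy is to wait only when the retry-after value is small and otherwise fall back to standard speed. A minimal sketch, assuming the header carries delta-seconds rather than an HTTP date (the function name and threshold are illustrative):

```python
def should_wait_for_fast_capacity(retry_after_header, max_wait_seconds: float = 5.0) -> bool:
    """Return True if the retry-after delay is short enough to wait for
    fast mode capacity, False if falling back to standard speed is
    preferable. A missing or non-numeric header counts as "don't wait"."""
    try:
        delay = float(retry_after_header)
    except (TypeError, ValueError):
        return False
    return delay <= max_wait_seconds
```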
If you'd prefer to fall back to standard speed rather than wait for fast mode capacity, catch the rate limit error and retry without speed: "fast". Set max_retries to 0 on the initial fast request to skip automatic retries and fail immediately on rate limit errors.
Falling back from fast to standard speed will result in a prompt cache miss. Requests at different speeds do not share cached prefixes.
Since setting max_retries to 0 also disables retries for other transient errors (overloaded, internal server errors), the examples below re-issue the original request with default retries for those cases.
```python
import anthropic

client = anthropic.Anthropic()

def create_message_with_fast_fallback(max_retries=None, max_attempts=3, **params):
    # Per-request retry overrides are applied via with_options;
    # None keeps the client's default retry behavior.
    request_client = client if max_retries is None else client.with_options(max_retries=max_retries)
    try:
        return request_client.beta.messages.create(**params)
    except anthropic.RateLimitError:
        # Fast mode capacity exhausted: retry once at standard speed,
        # with default retries restored.
        if params.get("speed") == "fast":
            del params["speed"]
            return create_message_with_fast_fallback(**params)
        raise
    except (anthropic.InternalServerError, anthropic.APIConnectionError):
        # Other transient errors (including 529 overloaded responses,
        # which surface as InternalServerError): re-issue the original
        # request with default retries, up to max_attempts times.
        if max_attempts > 1:
            return create_message_with_fast_fallback(max_attempts=max_attempts - 1, **params)
        raise

message = create_message_with_fast_fallback(
    model="claude-opus-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
    betas=["fast-mode-2026-02-01"],
    speed="fast",
    max_retries=0,  # fail immediately on rate limit errors so the fallback runs
)
```

Setting speed: "fast" with an unsupported model returns an error.

View detailed fast mode pricing information.
Check rate limit tiers for fast mode.
Control token usage with the effort parameter.