Batches
Create a Message Batch
Retrieve a Message Batch
List Message Batches
Cancel a Message Batch
Delete a Message Batch
Retrieve Message Batch results
Models
class BetaDeletedMessageBatch:
ID of the Message Batch.
JsonValue type: constant "message_batch_deleted"
Deleted object type.
For Message Batches, this is always "message_batch_deleted".
class BetaMessageBatch:
String id
Unique object identifier.
The format and length of IDs may change over time.
RFC 3339 datetime string representing the time at which the Message Batch was archived and its results became unavailable.
RFC 3339 datetime string representing the time at which cancellation was initiated for the Message Batch. Specified only if cancellation was initiated.
RFC 3339 datetime string representing the time at which the Message Batch was created.
Optional<LocalDateTime> endedAt
RFC 3339 datetime string representing the time at which processing for the Message Batch ended. Specified only once processing ends.
Processing ends when every request in a Message Batch has either succeeded, errored, canceled, or expired.
RFC 3339 datetime string representing the time at which the Message Batch will expire and end processing, which is 24 hours after creation.
ProcessingStatus processingStatus
Processing status of the Message Batch.
BetaMessageBatchRequestCounts requestCounts
Tallies requests within the Message Batch, categorized by their status.
Requests start as processing and move to one of the other statuses only once processing of the entire batch ends. The sum of all values always matches the total number of requests in the batch.
long canceled
Number of requests in the Message Batch that have been canceled.
This is zero until processing of the entire Message Batch has ended.
long errored
Number of requests in the Message Batch that encountered an error.
This is zero until processing of the entire Message Batch has ended.
long expired
Number of requests in the Message Batch that have expired.
This is zero until processing of the entire Message Batch has ended.
long processing
Number of requests in the Message Batch that are processing.
long succeeded
Number of requests in the Message Batch that have completed successfully.
This is zero until processing of the entire Message Batch has ended.
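The invariant above (the five status buckets always sum to the batch's total request count, and `processing` drains to zero once the batch ends) can be sketched with a minimal stand-in record. `RequestCounts` here is illustrative, not the SDK's `BetaMessageBatchRequestCounts` class itself.

```java
// Minimal stand-in for BetaMessageBatchRequestCounts (not the SDK class),
// illustrating the documented invariants.
public class RequestCountsCheck {
    record RequestCounts(long processing, long succeeded, long errored,
                         long canceled, long expired) {
        // The five buckets always sum to the total number of requests.
        long total() {
            return processing + succeeded + errored + canceled + expired;
        }

        // Terminal statuses are only populated once no request is still processing.
        boolean ended() {
            return processing == 0;
        }
    }

    public static void main(String[] args) {
        RequestCounts counts = new RequestCounts(0, 95, 3, 1, 1);
        System.out.println("total=" + counts.total() + " ended=" + counts.ended());
    }
}
```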
Optional<String> resultsUrl
URL to a .jsonl file containing the results of the Message Batch requests. Specified only once processing ends.
Results in the file are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.
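Because result lines can arrive in any order, a common pattern is to index the downloaded `.jsonl` lines by `custom_id` before matching them back to requests. The sketch below uses a deliberately simplified regex extraction; production code should use a real JSON parser.

```java
// Sketch: index .jsonl result lines by custom_id so out-of-order results
// can be matched back to their requests. Regex extraction is a
// simplification; use a JSON parser in real code.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResultMatcher {
    private static final Pattern CUSTOM_ID =
        Pattern.compile("\"custom_id\"\\s*:\\s*\"([^\"]+)\"");

    // Results may be given out of request order, so build a lookup map.
    public static Map<String, String> indexByCustomId(List<String> jsonlLines) {
        Map<String, String> byId = new HashMap<>();
        for (String line : jsonlLines) {
            Matcher m = CUSTOM_ID.matcher(line);
            if (m.find()) {
                byId.put(m.group(1), line);
            }
        }
        return byId;
    }
}
```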
JsonValue type: constant "message_batch"
Object type.
For Message Batches, this is always "message_batch".
class BetaMessageBatchCanceledResult:
class BetaMessageBatchErroredResult:
BetaErrorResponse error
BetaError error
class BetaInvalidRequestError:
class BetaAuthenticationError:
class BetaBillingError:
class BetaPermissionError:
class BetaNotFoundError:
class BetaRateLimitError:
class BetaGatewayTimeoutError:
class BetaApiError:
class BetaOverloadedError:
class BetaMessageBatchExpiredResult:
class BetaMessageBatchIndividualResponse:
This is a single line in the response .jsonl file and does not represent the response as a whole.
String customId
Developer-provided ID created for each request in a Message Batch. Useful for matching results to requests, as results may be given out of request order.
Must be unique for each request within the Message Batch.
BetaMessageBatchResult result
Processing result for this request.
Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.
class BetaMessageBatchSucceededResult:
BetaMessage message
String id
Unique object identifier.
The format and length of IDs may change over time.
Optional<BetaContainer> container
Information about the container used in the request (for the code execution tool)
Identifier for the container used in this request
The time at which the container will expire.
Optional<List<BetaSkill>> skills
Skills loaded in the container
Skill ID
Type type
Type of skill - either 'anthropic' (built-in) or 'custom' (user-defined)
Skill version or 'latest' for most recent version
List<BetaContentBlock> content
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
class BetaTextBlock:
Optional<List<BetaTextCitation>> citations
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
class BetaCitationCharLocation:
class BetaCitationPageLocation:
class BetaCitationContentBlockLocation:
class BetaCitationsWebSearchResultLocation:
class BetaCitationSearchResultLocation:
class BetaThinkingBlock:
class BetaRedactedThinkingBlock:
class BetaToolUseBlock:
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaServerToolUseBlock:
Name name
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaWebSearchToolResultBlock:
class BetaWebSearchToolResultError:
BetaWebSearchToolResultErrorCode errorCode
List<BetaWebSearchResultBlock>
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaWebFetchToolResultBlock:
Content content
class BetaWebFetchToolResultErrorBlock:
BetaWebFetchToolResultErrorCode errorCode
class BetaWebFetchBlock:
BetaDocumentBlock content
Optional<BetaCitationConfig> citationsCitation configuration for the document
Citation configuration for the document
Source source
class BetaBase64PdfSource:
class BetaPlainTextSource:
The title of the document
ISO 8601 timestamp when the content was retrieved
Fetched content URL
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaCodeExecutionToolResultBlock:
Code execution result with encrypted stdout for PFC + web_search results.
class BetaCodeExecutionToolResultError:
BetaCodeExecutionToolResultErrorCode errorCode
class BetaCodeExecutionResultBlock:
List<BetaCodeExecutionOutputBlock> content
class BetaEncryptedCodeExecutionResultBlock:
Code execution result with encrypted stdout for PFC + web_search results.
List<BetaCodeExecutionOutputBlock> content
class BetaBashCodeExecutionToolResultBlock:
Content content
class BetaBashCodeExecutionToolResultError:
ErrorCode errorCode
class BetaBashCodeExecutionResultBlock:
List<BetaBashCodeExecutionOutputBlock> content
class BetaTextEditorCodeExecutionToolResultBlock:
Content content
class BetaTextEditorCodeExecutionToolResultError:
ErrorCode errorCode
class BetaTextEditorCodeExecutionViewResultBlock:
FileType fileType
class BetaTextEditorCodeExecutionCreateResultBlock:
class BetaTextEditorCodeExecutionStrReplaceResultBlock:
class BetaToolSearchToolResultBlock:
Content content
class BetaToolSearchToolResultError:
ErrorCode errorCode
class BetaToolSearchToolSearchResultBlock:
List<BetaToolReferenceBlock> toolReferences
class BetaMcpToolUseBlock:
The name of the MCP tool
The name of the MCP server
class BetaMcpToolResultBlock:
Content content
List<BetaTextBlock>
Optional<List<BetaTextCitation>> citations
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
class BetaCitationCharLocation:
class BetaCitationPageLocation:
class BetaCitationContentBlockLocation:
class BetaCitationsWebSearchResultLocation:
class BetaCitationSearchResultLocation:
class BetaContainerUploadBlock:
Response model for a file uploaded to the container.
class BetaCompactionBlock:
A compaction block returned when autocompact is triggered.
When content is None, it indicates the compaction failed to produce a valid summary (e.g., malformed output from the model). Clients may round-trip compaction blocks with null content; the server treats them as no-ops.
Summary of compacted content, or null if compaction failed
Optional<BetaContextManagementResponse> contextManagement
Context management response.
Information about context management strategies applied during the request.
List<AppliedEdit> appliedEdits
List of context management edits that were applied.
class BetaClearToolUses20250919EditResponse:
Number of input tokens cleared by this edit.
Number of tool uses that were cleared.
The type of context management edit applied.
class BetaClearThinking20251015EditResponse:
Number of input tokens cleared by this edit.
Number of thinking turns that were cleared.
The type of context management edit applied.
Model model
The model that will complete your prompt.
See models for additional details and options.
Most intelligent model for building agents and coding
Frontier intelligence at scale — built for coding, agents, and enterprise workflows
Premium model combining maximum intelligence with practical performance
Premium model combining maximum intelligence with practical performance
High-performance model with early extended thinking
High-performance model with early extended thinking
Fastest and most compact model for near-instant responsiveness
Our fastest model
Hybrid model, capable of near-instant responses and extended thinking
Hybrid model, capable of near-instant responses and extended thinking
High-performance model with extended thinking
High-performance model with extended thinking
High-performance model with extended thinking
Our best model for real-world agents and coding
Our best model for real-world agents and coding
Our most capable model
Our most capable model
Our most capable model
Our most capable model
Excels at writing and complex tasks
Excels at writing and complex tasks
Our previous fastest and most cost-effective model
JsonValue role: constant "assistant"
Conversational role of the generated message.
This will always be "assistant".
Optional<BetaStopReason> stopReason
The reason that we stopped.
This may be one of the following values:
"end_turn": the model reached a natural stopping point
"max_tokens": we exceeded the requested max_tokens or the model's maximum
"stop_sequence": one of your provided custom stop_sequences was generated
"tool_use": the model invoked one or more tools
"pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
"refusal": when streaming classifiers intervene to handle potential policy violations
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
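The stop reasons above split into values that expect a follow-up request and values that end the exchange. A minimal dispatch sketch (the helper name is illustrative, not part of the SDK):

```java
// Sketch: deciding whether to send a follow-up request based on the
// stop_reason values documented above. Helper name is hypothetical.
public class StopReasonHandling {
    public static boolean needsFollowUp(String stopReason) {
        return switch (stopReason) {
            case "tool_use" -> true;    // run the requested tools, then continue
            case "pause_turn" -> true;  // resend the response as-is to continue the turn
            case "end_turn", "max_tokens", "stop_sequence", "refusal" -> false;
            default -> false;           // unknown values: treat as terminal
        };
    }
}
```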
Optional<String> stopSequence
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
JsonValue type: constant "message"
Object type.
For Messages, this is always "message".
BetaUsage usage
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
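The summation above is simple arithmetic; the sketch below spells it out with illustrative parameter names standing in for the corresponding usage fields.

```java
// Sketch: total input tokens as described above. Parameter names are
// stand-ins for the input_tokens, cache_creation_input_tokens, and
// cache_read_input_tokens usage fields.
public class InputTokenTotal {
    public static long totalInputTokens(long inputTokens,
                                        long cacheCreationInputTokens,
                                        long cacheReadInputTokens) {
        return inputTokens + cacheCreationInputTokens + cacheReadInputTokens;
    }
}
```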
Optional<BetaCacheCreation> cacheCreation
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The geographic region where inference was performed for this request.
The number of input tokens which were used.
Optional<List<BetaIterationsUsageItems>> iterations
Per-iteration token usage breakdown.
Each entry represents one sampling iteration, with its own input/output token counts and cache statistics. This allows you to:
- Determine which iterations exceeded long context thresholds (>=200k tokens)
- Calculate the true context window size from the last iteration
- Understand token accumulation across server-side tool use loops
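The per-iteration analysis above can be sketched as follows. `IterationUsage` is a stand-in for the SDK's iteration usage type, and the context-size formula (input + cache reads + output) is an assumption about how the fields compose; adjust it to the fields your requests actually populate.

```java
// Sketch: analyzing per-iteration usage entries. IterationUsage is a
// hypothetical stand-in; contextSize() assumes input_tokens excludes
// cached reads, which is an assumption, not SDK-documented behavior.
import java.util.List;

public class IterationAnalysis {
    record IterationUsage(long inputTokens, long cacheReadInputTokens,
                          long outputTokens) {
        long contextSize() {
            return inputTokens + cacheReadInputTokens + outputTokens;
        }
    }

    static final long LONG_CONTEXT_THRESHOLD = 200_000;

    // Count iterations whose context crossed the long-context threshold.
    public static long longContextIterations(List<IterationUsage> iterations) {
        return iterations.stream()
                .filter(it -> it.contextSize() >= LONG_CONTEXT_THRESHOLD)
                .count();
    }

    // The last iteration reflects the accumulated context window size.
    public static long finalContextSize(List<IterationUsage> iterations) {
        return iterations.get(iterations.size() - 1).contextSize();
    }
}
```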
class BetaMessageIterationUsage:
Token usage for a sampling iteration.
Optional<BetaCacheCreation> cacheCreation
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The number of input tokens which were used.
The number of output tokens which were used.
Usage for a sampling iteration
class BetaCompactionIterationUsage:
Token usage for a compaction iteration.
Optional<BetaCacheCreation> cacheCreation
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The number of input tokens which were used.
The number of output tokens which were used.
Usage for a compaction iteration
The number of output tokens which were used.
Optional<BetaServerToolUsage> serverToolUse
The number of server tool requests.
The number of web fetch tool requests.
The number of web search tool requests.
Optional<ServiceTier> serviceTier
If the request used the priority, standard, or batch tier.
Optional<Speed> speed
The inference speed mode used for this request.
class BetaMessageBatchRequestCounts:
long canceled
Number of requests in the Message Batch that have been canceled.
This is zero until processing of the entire Message Batch has ended.
long errored
Number of requests in the Message Batch that encountered an error.
This is zero until processing of the entire Message Batch has ended.
long expired
Number of requests in the Message Batch that have expired.
This is zero until processing of the entire Message Batch has ended.
long processing
Number of requests in the Message Batch that are processing.
long succeeded
Number of requests in the Message Batch that have completed successfully.
This is zero until processing of the entire Message Batch has ended.
class BetaMessageBatchResult: A union that can be one of several variants.
Processing result for this request.
Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.
class BetaMessageBatchSucceededResult:
BetaMessage message
String idUnique object identifier.
Unique object identifier.
The format and length of IDs may change over time.
Optional<BetaContainer> containerInformation about the container used in the request (for the code execution tool)
Information about the container used in the request (for the code execution tool)
Identifier for the container used in this request
The time at which the container will expire.
Optional<List<BetaSkill>> skillsSkills loaded in the container
Skills loaded in the container
Skill ID
Type typeType of skill - either 'anthropic' (built-in) or 'custom' (user-defined)
Type of skill - either 'anthropic' (built-in) or 'custom' (user-defined)
Skill version or 'latest' for most recent version
List<BetaContentBlock> contentContent generated by the model.
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
class BetaTextBlock:
Optional<List<BetaTextCitation>> citationsCitations supporting the text block.
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
class BetaCitationCharLocation:
class BetaCitationPageLocation:
class BetaCitationContentBlockLocation:
class BetaCitationsWebSearchResultLocation:
class BetaCitationSearchResultLocation:
class BetaThinkingBlock:
class BetaRedactedThinkingBlock:
class BetaToolUseBlock:
Optional<Caller> callerTool invocation directly from the model.
Tool invocation directly from the model.
class BetaDirectCaller:Tool invocation directly from the model.
Tool invocation directly from the model.
class BetaServerToolCaller:Tool invocation generated by a server-side tool.
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaServerToolUseBlock:
Name name
Optional<Caller> callerTool invocation directly from the model.
Tool invocation directly from the model.
class BetaDirectCaller:Tool invocation directly from the model.
Tool invocation directly from the model.
class BetaServerToolCaller:Tool invocation generated by a server-side tool.
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaWebSearchToolResultBlock:
class BetaWebSearchToolResultError:
BetaWebSearchToolResultErrorCode errorCode
List<BetaWebSearchResultBlock>
Optional<Caller> callerTool invocation directly from the model.
Tool invocation directly from the model.
class BetaDirectCaller:Tool invocation directly from the model.
Tool invocation directly from the model.
class BetaServerToolCaller:Tool invocation generated by a server-side tool.
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaWebFetchToolResultBlock:
Content content
class BetaWebFetchToolResultErrorBlock:
BetaWebFetchToolResultErrorCode errorCode
class BetaWebFetchBlock:
BetaDocumentBlock content
Optional<BetaCitationConfig> citationsCitation configuration for the document
Citation configuration for the document
Source source
class BetaBase64PdfSource:
class BetaPlainTextSource:
The title of the document
ISO 8601 timestamp when the content was retrieved
Fetched content URL
Optional<Caller> callerTool invocation directly from the model.
Tool invocation directly from the model.
class BetaDirectCaller:Tool invocation directly from the model.
Tool invocation directly from the model.
class BetaServerToolCaller:Tool invocation generated by a server-side tool.
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaCodeExecutionToolResultBlock:
Code execution result with encrypted stdout for PFC + web_search results.
Code execution result with encrypted stdout for PFC + web_search results.
class BetaCodeExecutionToolResultError:
BetaCodeExecutionToolResultErrorCode errorCode
class BetaCodeExecutionResultBlock:
List<BetaCodeExecutionOutputBlock> content
class BetaEncryptedCodeExecutionResultBlock:Code execution result with encrypted stdout for PFC + web_search results.
Code execution result with encrypted stdout for PFC + web_search results.
List<BetaCodeExecutionOutputBlock> content
class BetaBashCodeExecutionToolResultBlock:
Content content
class BetaBashCodeExecutionToolResultError:
ErrorCode errorCode
class BetaBashCodeExecutionResultBlock:
List<BetaBashCodeExecutionOutputBlock> content
class BetaTextEditorCodeExecutionToolResultBlock:
Content content
class BetaTextEditorCodeExecutionToolResultError:
ErrorCode errorCode
class BetaTextEditorCodeExecutionViewResultBlock:
FileType fileType
class BetaTextEditorCodeExecutionCreateResultBlock:
class BetaTextEditorCodeExecutionStrReplaceResultBlock:
class BetaToolSearchToolResultBlock:
Content content
class BetaToolSearchToolResultError:
ErrorCode errorCode
class BetaToolSearchToolSearchResultBlock:
List<BetaToolReferenceBlock> toolReferences
class BetaMcpToolUseBlock:
The name of the MCP tool
The name of the MCP server
class BetaMcpToolResultBlock:
Content content
List<BetaTextBlock>
Optional<List<BetaTextCitation>> citationsCitations supporting the text block.
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
class BetaCitationCharLocation:
class BetaCitationPageLocation:
class BetaCitationContentBlockLocation:
class BetaCitationsWebSearchResultLocation:
class BetaCitationSearchResultLocation:
class BetaContainerUploadBlock:Response model for a file uploaded to the container.
Response model for a file uploaded to the container.
class BetaCompactionBlock:A compaction block returned when autocompact is triggered.
A compaction block returned when autocompact is triggered.
When content is None, it indicates the compaction failed to produce a valid summary (e.g., malformed output from the model). Clients may round-trip compaction blocks with null content; the server treats them as no-ops.
Summary of compacted content, or null if compaction failed
Optional<BetaContextManagementResponse> contextManagementContext management response.
Context management response.
Information about context management strategies applied during the request.
List<AppliedEdit> appliedEditsList of context management edits that were applied.
List of context management edits that were applied.
class BetaClearToolUses20250919EditResponse:
Number of input tokens cleared by this edit.
Number of tool uses that were cleared.
The type of context management edit applied.
class BetaClearThinking20251015EditResponse:
Number of input tokens cleared by this edit.
Number of thinking turns that were cleared.
The type of context management edit applied.
Model modelThe model that will complete your prompt.
The model that will complete your prompt.
See models for additional details and options.
Most intelligent model for building agents and coding
Frontier intelligence at scale — built for coding, agents, and enterprise workflows
Premium model combining maximum intelligence with practical performance
Premium model combining maximum intelligence with practical performance
High-performance model with early extended thinking
High-performance model with early extended thinking
Fastest and most compact model for near-instant responsiveness
Our fastest model
Hybrid model, capable of near-instant responses and extended thinking
Hybrid model, capable of near-instant responses and extended thinking
High-performance model with extended thinking
High-performance model with extended thinking
High-performance model with extended thinking
Our best model for real-world agents and coding
Our best model for real-world agents and coding
Our most capable model
Our most capable model
Our most capable model
Our most capable model
Excels at writing and complex tasks
Excels at writing and complex tasks
Our previous most fast and cost-effective
JsonValue; role "assistant"constant"assistant"constantConversational role of the generated message.
Conversational role of the generated message.
This will always be "assistant".
Optional<BetaStopReason> stopReasonThe reason that we stopped.
The reason that we stopped.
This may be one the following values:
"end_turn": the model reached a natural stopping point"max_tokens": we exceeded the requestedmax_tokensor the model's maximum"stop_sequence": one of your provided customstop_sequenceswas generated"tool_use": the model invoked one or more tools"pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue."refusal": when streaming classifiers intervene to handle potential policy violations
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
Optional<String> stopSequenceWhich custom stop sequence was generated, if any.
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
JsonValue; type "message"constant"message"constantObject type.
Object type.
For Messages, this is always "message".
BetaUsage usageBilling and rate-limit usage.
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
Optional<BetaCacheCreation> cacheCreationBreakdown of cached tokens by TTL
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The geographic region where inference was performed for this request.
The number of input tokens which were used.
Optional<List<BetaIterationsUsageItems>> iterationsPer-iteration token usage breakdown.
Per-iteration token usage breakdown.
Each entry represents one sampling iteration, with its own input/output token counts and cache statistics. This allows you to:
- Determine which iterations exceeded long context thresholds (>=200k tokens)
- Calculate the true context window size from the last iteration
- Understand token accumulation across server-side tool use loops
class BetaMessageIterationUsage:Token usage for a sampling iteration.
Token usage for a sampling iteration.
Optional<BetaCacheCreation> cacheCreationBreakdown of cached tokens by TTL
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The number of input tokens which were used.
The number of output tokens which were used.
Usage for a sampling iteration
class BetaCompactionIterationUsage:Token usage for a compaction iteration.
Token usage for a compaction iteration.
Optional<BetaCacheCreation> cacheCreationBreakdown of cached tokens by TTL
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The number of input tokens which were used.
The number of output tokens which were used.
Usage for a compaction iteration
The number of output tokens which were used.
Optional<BetaServerToolUsage> serverToolUse
The number of server tool requests.
The number of web fetch tool requests.
The number of web search tool requests.
Optional<ServiceTier> serviceTier
If the request used the priority, standard, or batch tier.
Optional<Speed> speed
The inference speed mode used for this request.
class BetaMessageBatchErroredResult:
BetaErrorResponse error
BetaError error
class BetaInvalidRequestError:
class BetaAuthenticationError:
class BetaBillingError:
class BetaPermissionError:
class BetaNotFoundError:
class BetaRateLimitError:
class BetaGatewayTimeoutError:
class BetaApiError:
class BetaOverloadedError:
class BetaMessageBatchCanceledResult:
class BetaMessageBatchExpiredResult:
class BetaMessageBatchSucceededResult:
BetaMessage message
String id
Unique object identifier.
The format and length of IDs may change over time.
Optional<BetaContainer> container
Information about the container used in the request (for the code execution tool)
Identifier for the container used in this request
The time at which the container will expire.
Optional<List<BetaSkill>> skills
Skills loaded in the container
Skill ID
Type type
Type of skill - either 'anthropic' (built-in) or 'custom' (user-defined)
Skill version or 'latest' for most recent version
List<BetaContentBlock> content
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
class BetaTextBlock:
Optional<List<BetaTextCitation>> citations
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
class BetaCitationCharLocation:
class BetaCitationPageLocation:
class BetaCitationContentBlockLocation:
class BetaCitationsWebSearchResultLocation:
class BetaCitationSearchResultLocation:
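The document-type-to-citation-type mapping described above can be sketched as a plain lookup table (the string keys here are descriptive labels chosen for this example, not official API identifiers):

```java
import java.util.Map;

public class CitationTypes {
    // Which citation type is returned for each kind of cited document.
    static final Map<String, String> CITATION_TYPE = Map.of(
            "pdf", "page_location",
            "plain_text", "char_location",
            "content_document", "content_block_location");

    public static void main(String[] args) {
        System.out.println(CITATION_TYPE.get("pdf")); // prints page_location
    }
}
```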
class BetaThinkingBlock:
class BetaRedactedThinkingBlock:
class BetaToolUseBlock:
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaServerToolUseBlock:
Name name
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaWebSearchToolResultBlock:
class BetaWebSearchToolResultError:
BetaWebSearchToolResultErrorCode errorCode
List<BetaWebSearchResultBlock>
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaWebFetchToolResultBlock:
Content content
class BetaWebFetchToolResultErrorBlock:
BetaWebFetchToolResultErrorCode errorCode
class BetaWebFetchBlock:
BetaDocumentBlock content
Optional<BetaCitationConfig> citations
Citation configuration for the document
Source source
class BetaBase64PdfSource:
class BetaPlainTextSource:
The title of the document
ISO 8601 timestamp when the content was retrieved
Fetched content URL
Optional<Caller> caller
Tool invocation directly from the model.
class BetaDirectCaller:
Tool invocation directly from the model.
class BetaServerToolCaller:
Tool invocation generated by a server-side tool.
class BetaServerToolCaller20260120:
class BetaCodeExecutionToolResultBlock:
Code execution result with encrypted stdout for PFC + web_search results.
class BetaCodeExecutionToolResultError:
BetaCodeExecutionToolResultErrorCode errorCode
class BetaCodeExecutionResultBlock:
List<BetaCodeExecutionOutputBlock> content
class BetaEncryptedCodeExecutionResultBlock:
Code execution result with encrypted stdout for PFC + web_search results.
List<BetaCodeExecutionOutputBlock> content
class BetaBashCodeExecutionToolResultBlock:
Content content
class BetaBashCodeExecutionToolResultError:
ErrorCode errorCode
class BetaBashCodeExecutionResultBlock:
List<BetaBashCodeExecutionOutputBlock> content
class BetaTextEditorCodeExecutionToolResultBlock:
Content content
class BetaTextEditorCodeExecutionToolResultError:
ErrorCode errorCode
class BetaTextEditorCodeExecutionViewResultBlock:
FileType fileType
class BetaTextEditorCodeExecutionCreateResultBlock:
class BetaTextEditorCodeExecutionStrReplaceResultBlock:
class BetaToolSearchToolResultBlock:
Content content
class BetaToolSearchToolResultError:
ErrorCode errorCode
class BetaToolSearchToolSearchResultBlock:
List<BetaToolReferenceBlock> toolReferences
class BetaMcpToolUseBlock:
The name of the MCP tool
The name of the MCP server
class BetaMcpToolResultBlock:
Content content
List<BetaTextBlock>
Optional<List<BetaTextCitation>> citations
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
class BetaCitationCharLocation:
class BetaCitationPageLocation:
class BetaCitationContentBlockLocation:
class BetaCitationsWebSearchResultLocation:
class BetaCitationSearchResultLocation:
class BetaContainerUploadBlock:
Response model for a file uploaded to the container.
class BetaCompactionBlock:
A compaction block returned when autocompact is triggered.
When content is null, it indicates the compaction failed to produce a valid summary (e.g., malformed output from the model). Clients may round-trip compaction blocks with null content; the server treats them as no-ops.
Summary of compacted content, or null if compaction failed
Optional<BetaContextManagementResponse> contextManagement
Context management response.
Information about context management strategies applied during the request.
List<AppliedEdit> appliedEdits
List of context management edits that were applied.
class BetaClearToolUses20250919EditResponse:
Number of input tokens cleared by this edit.
Number of tool uses that were cleared.
The type of context management edit applied.
class BetaClearThinking20251015EditResponse:
Number of input tokens cleared by this edit.
Number of thinking turns that were cleared.
The type of context management edit applied.
Model model
The model that will complete your prompt.
See models for additional details and options.
Most intelligent model for building agents and coding
Frontier intelligence at scale — built for coding, agents, and enterprise workflows
Premium model combining maximum intelligence with practical performance
Premium model combining maximum intelligence with practical performance
High-performance model with early extended thinking
High-performance model with early extended thinking
Fastest and most compact model for near-instant responsiveness
Our fastest model
Hybrid model, capable of near-instant responses and extended thinking
Hybrid model, capable of near-instant responses and extended thinking
High-performance model with extended thinking
High-performance model with extended thinking
High-performance model with extended thinking
Our best model for real-world agents and coding
Our best model for real-world agents and coding
Our most capable model
Our most capable model
Our most capable model
Our most capable model
Excels at writing and complex tasks
Excels at writing and complex tasks
Our previous fastest and most cost-effective model
JsonValue role; constant "assistant"
Conversational role of the generated message.
This will always be "assistant".
Optional<BetaStopReason> stopReason
The reason that we stopped.
This may be one of the following values:
- "end_turn": the model reached a natural stopping point
- "max_tokens": we exceeded the requested max_tokens or the model's maximum
- "stop_sequence": one of your provided custom stop_sequences was generated
- "tool_use": the model invoked one or more tools
- "pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
- "refusal": when streaming classifiers intervene to handle potential policy violations
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
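The stop reasons above typically each call for a different follow-up. A minimal sketch, branching on a plain string rather than the SDK's BetaStopReason type (the suggested actions paraphrase the list above and are not prescribed by the API):

```java
public class StopReasonHandling {
    // Map a stop_reason value to a suggested next step.
    static String nextAction(String stopReason) {
        return switch (stopReason) {
            case "end_turn" -> "done";
            case "max_tokens" -> "retry with a higher max_tokens";
            case "stop_sequence" -> "done (custom stop sequence matched)";
            case "tool_use" -> "run the requested tools and send results back";
            case "pause_turn" -> "resend the response as-is to continue the turn";
            case "refusal" -> "surface the refusal to the user";
            default -> "unknown stop reason";
        };
    }

    public static void main(String[] args) {
        System.out.println(nextAction("pause_turn"));
        // prints: resend the response as-is to continue the turn
    }
}
```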
Optional<String> stopSequence
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
JsonValue type; constant "message"
Object type.
For Messages, this is always "message".
BetaUsage usage
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
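The summation above is simple arithmetic over the three usage fields; a sketch with hypothetical token counts (illustrative values, not real API output):

```java
public class TotalInputTokens {
    public static void main(String[] args) {
        long inputTokens = 1_200;               // uncached input tokens
        long cacheCreationInputTokens = 8_000;  // tokens written to the cache
        long cacheReadInputTokens = 50_000;     // tokens read from the cache

        // Total input = input_tokens + cache_creation_input_tokens + cache_read_input_tokens
        long totalInputTokens = inputTokens + cacheCreationInputTokens + cacheReadInputTokens;
        System.out.println(totalInputTokens); // prints 59200
    }
}
```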
Optional<BetaCacheCreation> cacheCreation
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The geographic region where inference was performed for this request.
The number of input tokens which were used.
Optional<List<BetaIterationsUsageItems>> iterations
Per-iteration token usage breakdown.
Each entry represents one sampling iteration, with its own input/output token counts and cache statistics. This allows you to:
- Determine which iterations exceeded long context thresholds (>=200k tokens)
- Calculate the true context window size from the last iteration
- Understand token accumulation across server-side tool use loops
class BetaMessageIterationUsage:
Token usage for a sampling iteration.
Optional<BetaCacheCreation> cacheCreation
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The number of input tokens which were used.
The number of output tokens which were used.
Usage for a sampling iteration
class BetaCompactionIterationUsage:
Token usage for a compaction iteration.
Optional<BetaCacheCreation> cacheCreation
Breakdown of cached tokens by TTL
The number of input tokens used to create the 1 hour cache entry.
The number of input tokens used to create the 5 minute cache entry.
The number of input tokens used to create the cache entry.
The number of input tokens read from the cache.
The number of input tokens which were used.
The number of output tokens which were used.
Usage for a compaction iteration
The number of output tokens which were used.
Optional<BetaServerToolUsage> serverToolUse
The number of server tool requests.
The number of web fetch tool requests.
The number of web search tool requests.
Optional<ServiceTier> serviceTier
If the request used the priority, standard, or batch tier.
Optional<Speed> speed
The inference speed mode used for this request.