Retrieve Message Batch results
Streams the results of a Message Batch as a .jsonl file.
Each line in the file is a JSON object containing the result of a single request in the Message Batch. Results are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.
Learn more about the Message Batches API in our user guide
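Because results arrive as newline-delimited JSON and may be out of order, a client typically parses each line and indexes the results by custom_id. A minimal sketch in Python; the sample lines below are illustrative stand-ins for the streamed response body, not real API output:

```python
import json

# Two illustrative result lines, as they might appear in the .jsonl stream
# returned by GET /v1/messages/batches/{message_batch_id}/results.
raw_jsonl = """\
{"custom_id": "req-2", "result": {"type": "succeeded", "message": {"id": "msg_b", "content": []}}}
{"custom_id": "req-1", "result": {"type": "errored", "error": {"type": "error", "error": {"type": "invalid_request_error", "message": "bad input"}}}}
"""

# Index results by custom_id so they can be matched back to the original
# requests regardless of the order in which they were returned.
results_by_id = {}
for line in raw_jsonl.splitlines():
    if not line.strip():
        continue  # skip blank lines defensively
    obj = json.loads(line)
    results_by_id[obj["custom_id"]] = obj["result"]

print(results_by_id["req-1"]["type"])  # errored
print(results_by_id["req-2"]["type"])  # succeeded
```

In practice each line would be read incrementally from the HTTP response stream rather than buffered into a single string.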
Path Parameters
message_batch_id: string
ID of the Message Batch.
Returns
MessageBatchIndividualResponse = object { custom_id, result }
This is a single line in the response .jsonl file and does not represent the response as a whole.
custom_id: string
Developer-provided ID created for each request in a Message Batch. Useful for matching results to requests, as results may be given out of request order.
Must be unique for each request within the Message Batch.
result: MessageBatchResult
Processing result for this request.
Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.
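The four result variants described below are distinguished by their type field. A sketch of how a client might branch on it; the field paths follow the shapes documented below, and the summary strings are placeholders:

```python
def describe_result(result: dict) -> str:
    """Return a one-line summary for a single batch result object."""
    rtype = result["type"]
    if rtype == "succeeded":
        # Successful results carry the full Message under "message".
        return f"ok: message {result['message']['id']}"
    if rtype == "errored":
        # Errored results carry an ErrorResponse under "error".
        return f"error: {result['error']['error']['type']}"
    if rtype == "canceled":
        return "canceled before processing"
    if rtype == "expired":
        return "expired before processing"
    return f"unknown result type: {rtype}"

print(describe_result({"type": "canceled"}))  # canceled before processing
```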
MessageBatchSucceededResult = object { message, type }
message: Message { id, container, content, 6 more }
id: string
Unique object identifier.
The format and length of IDs may change over time.
container: Container { id, expires_at }
Information about the container used in the request (for the code execution tool).
id: Identifier for the container used in this request.
expires_at: The time at which the container will expire.
content: array of ContentBlock
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
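Continuing the example above, the full assistant answer is the prefilled prefix plus the text of the returned blocks. A small sketch of reassembling it:

```python
# Prefilled assistant turn from the request (from the example above).
prefill = "The best answer is ("

# Response content as returned by the model.
response_content = [{"type": "text", "text": "B)"}]

# Append the text of all text blocks to the prefilled prefix.
completed = prefill + "".join(
    block["text"] for block in response_content if block["type"] == "text"
)
print(completed)  # The best answer is (B)
```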
TextBlock = object { citations, text, type }
citations: array of TextCitation
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
CitationCharLocation = object { cited_text, document_index, document_title, 4 more }
CitationPageLocation = object { cited_text, document_index, document_title, 4 more }
CitationContentBlockLocation = object { cited_text, document_index, document_title, 4 more }
CitationsWebSearchResultLocation = object { cited_text, encrypted_index, title, 2 more }
CitationsSearchResultLocation = object { cited_text, end_block_index, search_result_index, 4 more }
ThinkingBlock = object { signature, thinking, type }
RedactedThinkingBlock = object { data, type }
ToolUseBlock = object { id, caller, input, 2 more }
caller: DirectCaller { type } or ServerToolCaller { tool_id, type } or ServerToolCaller20260120 { tool_id, type }
DirectCaller = object { type }
Tool invocation directly from the model.
ServerToolCaller = object { tool_id, type }
Tool invocation generated by a server-side tool.
ServerToolCaller20260120 = object { tool_id, type }
ServerToolUseBlock = object { id, caller, input, 2 more }
caller: DirectCaller { type } or ServerToolCaller { tool_id, type } or ServerToolCaller20260120 { tool_id, type }
DirectCaller = object { type }
Tool invocation directly from the model.
ServerToolCaller = object { tool_id, type }
Tool invocation generated by a server-side tool.
ServerToolCaller20260120 = object { tool_id, type }
name: "web_search" or "web_fetch" or "code_execution" or 4 more
WebSearchToolResultBlock = object { caller, content, tool_use_id, type }
caller: DirectCaller { type } or ServerToolCaller { tool_id, type } or ServerToolCaller20260120 { tool_id, type }
DirectCaller = object { type }
Tool invocation directly from the model.
ServerToolCaller = object { tool_id, type }
Tool invocation generated by a server-side tool.
ServerToolCaller20260120 = object { tool_id, type }
content: WebSearchToolResultBlockContent
WebSearchToolResultError = object { error_code, type }
error_code: WebSearchToolResultErrorCode
UnionMember1 = array of WebSearchResultBlock { encrypted_content, page_age, title, 2 more }
WebFetchToolResultBlock = object { caller, content, tool_use_id, type }
caller: DirectCaller { type } or ServerToolCaller { tool_id, type } or ServerToolCaller20260120 { tool_id, type }
DirectCaller = object { type }
Tool invocation directly from the model.
ServerToolCaller = object { tool_id, type }
Tool invocation generated by a server-side tool.
ServerToolCaller20260120 = object { tool_id, type }
content: WebFetchToolResultErrorBlock { error_code, type } or WebFetchBlock { content, retrieved_at, type, url }
WebFetchToolResultErrorBlock = object { error_code, type }
error_code: WebFetchToolResultErrorCode
WebFetchBlock = object { content, retrieved_at, type, url }
content: DocumentBlock { citations, source, title, type }
citations: CitationsConfig { enabled }
Citation configuration for the document.
source: Base64PDFSource { data, media_type, type } or PlainTextSource { data, media_type, type }
Base64PDFSource = object { data, media_type, type }
PlainTextSource = object { data, media_type, type }
title: The title of the document.
retrieved_at: ISO 8601 timestamp when the content was retrieved.
url: Fetched content URL.
CodeExecutionToolResultBlock = object { content, tool_use_id, type }
content: CodeExecutionToolResultBlockContent
Code execution result with encrypted stdout for PFC + web_search results.
CodeExecutionToolResultError = object { error_code, type }
error_code: CodeExecutionToolResultErrorCode
CodeExecutionResultBlock = object { content, return_code, stderr, 2 more }
content: array of CodeExecutionOutputBlock { file_id, type }
EncryptedCodeExecutionResultBlock = object { content, encrypted_stdout, return_code, 2 more }
Code execution result with encrypted stdout for PFC + web_search results.
content: array of CodeExecutionOutputBlock { file_id, type }
BashCodeExecutionToolResultBlock = object { content, tool_use_id, type }
content: BashCodeExecutionToolResultError { error_code, type } or BashCodeExecutionResultBlock { content, return_code, stderr, 2 more }
BashCodeExecutionToolResultError = object { error_code, type }
error_code: BashCodeExecutionToolResultErrorCode
BashCodeExecutionResultBlock = object { content, return_code, stderr, 2 more }
content: array of BashCodeExecutionOutputBlock { file_id, type }
TextEditorCodeExecutionToolResultBlock = object { content, tool_use_id, type }
content: TextEditorCodeExecutionToolResultError { error_code, error_message, type } or TextEditorCodeExecutionViewResultBlock { content, file_type, num_lines, 3 more } or TextEditorCodeExecutionCreateResultBlock { is_file_update, type } or TextEditorCodeExecutionStrReplaceResultBlock { lines, new_lines, new_start, 3 more }
TextEditorCodeExecutionToolResultError = object { error_code, error_message, type }
error_code: TextEditorCodeExecutionToolResultErrorCode
TextEditorCodeExecutionViewResultBlock = object { content, file_type, num_lines, 3 more }
file_type: "text" or "image" or "pdf"
TextEditorCodeExecutionCreateResultBlock = object { is_file_update, type }
TextEditorCodeExecutionStrReplaceResultBlock = object { lines, new_lines, new_start, 3 more }
ToolSearchToolResultBlock = object { content, tool_use_id, type }
content: ToolSearchToolResultError { error_code, error_message, type } or ToolSearchToolSearchResultBlock { tool_references, type }
ToolSearchToolResultError = object { error_code, error_message, type }
error_code: ToolSearchToolResultErrorCode
ToolSearchToolSearchResultBlock = object { tool_references, type }
tool_references: array of ToolReferenceBlock { tool_name, type }
ContainerUploadBlock = object { file_id, type }
Response model for a file uploaded to the container.
model: Model
The model that will complete your prompt.
See models for additional details and options.
UnionMember0 = "claude-opus-4-6" or "claude-sonnet-4-6" or "claude-haiku-4-5" or 12 more
Most intelligent model for building agents and coding
Best combination of speed and intelligence
Fastest model with near-frontier intelligence
Premium model combining maximum intelligence with practical performance
High-performance model for agents and coding
Exceptional model for specialized complex tasks
Powerful model for complex tasks
High-performance model with extended thinking
Fast and cost-effective model
role: "assistant"
Conversational role of the generated message.
This will always be "assistant".
stop_reason: StopReason
The reason that we stopped.
This may be one of the following values:
- "end_turn": the model reached a natural stopping point
- "max_tokens": we exceeded the requested max_tokens or the model's maximum
- "stop_sequence": one of your provided custom stop_sequences was generated
- "tool_use": the model invoked one or more tools
- "pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
- "refusal": when streaming classifiers intervene to handle potential policy violations
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
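A sketch of how a client might branch on stop_reason; the action strings are placeholders for whatever follow-up the client actually performs (e.g., for "pause_turn", resending the paused response content in a new request):

```python
from typing import Optional


def next_action(stop_reason: Optional[str]) -> str:
    """Map a stop_reason to a coarse follow-up action for the client."""
    if stop_reason is None:
        # Only seen mid-stream (e.g., in the message_start event).
        return "streaming: wait for more events"
    if stop_reason == "pause_turn":
        # Provide the response back as-is in a new request to continue.
        return "resend response to continue the turn"
    if stop_reason == "max_tokens":
        return "raise max_tokens or shorten the prompt"
    if stop_reason in ("end_turn", "stop_sequence", "tool_use", "refusal"):
        return f"terminal: {stop_reason}"
    return "unrecognized stop_reason"


print(next_action("pause_turn"))  # resend response to continue the turn
```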
stop_sequence: string
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
type: "message"
Object type.
For Messages, this is always "message".
usage: Usage { cache_creation, cache_creation_input_tokens, cache_read_input_tokens, 5 more }
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
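The total-input-token relationship above can be checked directly on a usage object; the values here are illustrative:

```python
# Illustrative usage payload following the field names above.
usage = {
    "input_tokens": 120,
    "cache_creation_input_tokens": 500,
    "cache_read_input_tokens": 2000,
    "output_tokens": 85,
}

# Total input tokens for the request is the sum of fresh input tokens,
# cache-write input tokens, and cache-read input tokens.
total_input = (
    usage["input_tokens"]
    + usage["cache_creation_input_tokens"]
    + usage["cache_read_input_tokens"]
)
print(total_input)  # 2620
```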
cache_creation: CacheCreation { ephemeral_1h_input_tokens, ephemeral_5m_input_tokens }
Breakdown of cached tokens by TTL.
ephemeral_1h_input_tokens: The number of input tokens used to create the 1 hour cache entry.
ephemeral_5m_input_tokens: The number of input tokens used to create the 5 minute cache entry.
cache_creation_input_tokens: The number of input tokens used to create the cache entry.
cache_read_input_tokens: The number of input tokens read from the cache.
The geographic region where inference was performed for this request.
input_tokens: The number of input tokens which were used.
output_tokens: The number of output tokens which were used.
server_tool_use: ServerToolUsage { web_fetch_requests, web_search_requests }
The number of server tool requests.
web_fetch_requests: The number of web fetch tool requests.
web_search_requests: The number of web search tool requests.
service_tier: "standard" or "priority" or "batch"
If the request used the priority, standard, or batch tier.
MessageBatchErroredResult = object { error, type }
error: ErrorResponse { error, request_id, type }
error: ErrorObject
InvalidRequestError = object { message, type }
AuthenticationError = object { message, type }
BillingError = object { message, type }
PermissionError = object { message, type }
NotFoundError = object { message, type }
RateLimitError = object { message, type }
GatewayTimeoutError = object { message, type }
APIErrorObject = object { message, type }
OverloadedError = object { message, type }
MessageBatchCanceledResult = object { type }
MessageBatchExpiredResult = object { type }
curl https://api.anthropic.com/v1/messages/batches/$MESSAGE_BATCH_ID/results \
  -H "anthropic-version: 2023-06-01" \
  -H "x-api-key: $ANTHROPIC_API_KEY"