Retrieve Message Batch results
Streams the results of a Message Batch as a .jsonl file.
Each line in the file is a JSON object containing the result of a single request in the Message Batch. Results are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.
Learn more about the Message Batches API in our user guide
Parameters
messageBatchID string
ID of the Message Batch.
Returns
type MessageBatchIndividualResponse struct{…}
This is a single line in the response .jsonl file and does not represent the response as a whole.
CustomID string
Developer-provided ID created for each request in a Message Batch. Useful for matching results to requests, as results may be given out of request order.
Must be unique for each request within the Message Batch.
Result MessageBatchResultUnion
Processing result for this request.
Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.
type MessageBatchSucceededResult struct{…}
Message Message
ID string
Unique object identifier.
The format and length of IDs may change over time.
Content []ContentBlockUnion
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{"type": "text", "text": "Hi, I'm Claude."}]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
{"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
{"role": "assistant", "content": "The best answer is ("}
]
Then the response content might be:
[{"type": "text", "text": "B)"}]
type TextBlock struct{…}
Citations []TextCitationUnion
Citations supporting the text block.
The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.
type CitationCharLocation struct{…}
Type CharLocation
type CitationPageLocation struct{…}
Type PageLocation
type CitationContentBlockLocation struct{…}
Type ContentBlockLocation
type CitationsWebSearchResultLocation struct{…}
Type WebSearchResultLocation
type CitationsSearchResultLocation struct{…}
Type SearchResultLocation
Type Text
type ThinkingBlock struct{…}
Type Thinking
type RedactedThinkingBlock struct{…}
Type RedactedThinking
type ToolUseBlock struct{…}
Type ToolUse
type ServerToolUseBlock struct{…}
Name WebSearch
Type ServerToolUse
type WebSearchToolResultBlock struct{…}
type WebSearchToolResultError struct{…}
ErrorCode WebSearchToolResultErrorErrorCode
Type WebSearchToolResultError
Type WebSearchResult
Type WebSearchToolResult
Model Model
The model that will complete your prompt.
See models for additional details and options.
type Model string
The model that will complete your prompt.
See models for additional details and options.
const ModelClaudeOpus4_5_20251101 Model = "claude-opus-4-5-20251101"
Premium model combining maximum intelligence with practical performance
const ModelClaudeOpus4_5 Model = "claude-opus-4-5"
Premium model combining maximum intelligence with practical performance
const ModelClaude3_7SonnetLatest Model = "claude-3-7-sonnet-latest"
High-performance model with early extended thinking
const ModelClaude3_7Sonnet20250219 Model = "claude-3-7-sonnet-20250219"
High-performance model with early extended thinking
const ModelClaude3_5HaikuLatest Model = "claude-3-5-haiku-latest"
Fastest and most compact model for near-instant responsiveness
const ModelClaude3_5Haiku20241022 Model = "claude-3-5-haiku-20241022"
Our fastest model
const ModelClaudeHaiku4_5 Model = "claude-haiku-4-5"
Hybrid model, capable of near-instant responses and extended thinking
const ModelClaudeHaiku4_5_20251001 Model = "claude-haiku-4-5-20251001"
Hybrid model, capable of near-instant responses and extended thinking
const ModelClaudeSonnet4_20250514 Model = "claude-sonnet-4-20250514"
High-performance model with extended thinking
const ModelClaudeSonnet4_0 Model = "claude-sonnet-4-0"
High-performance model with extended thinking
const ModelClaude4Sonnet20250514 Model = "claude-4-sonnet-20250514"
High-performance model with extended thinking
const ModelClaudeSonnet4_5 Model = "claude-sonnet-4-5"
Our best model for real-world agents and coding
const ModelClaudeSonnet4_5_20250929 Model = "claude-sonnet-4-5-20250929"
Our best model for real-world agents and coding
const ModelClaudeOpus4_0 Model = "claude-opus-4-0"
Our most capable model
const ModelClaudeOpus4_20250514 Model = "claude-opus-4-20250514"
Our most capable model
const ModelClaude4Opus20250514 Model = "claude-4-opus-20250514"
Our most capable model
const ModelClaudeOpus4_1_20250805 Model = "claude-opus-4-1-20250805"
Our most capable model
const ModelClaude3OpusLatest Model = "claude-3-opus-latest"
Excels at writing and complex tasks
const ModelClaude_3_Opus_20240229 Model = "claude-3-opus-20240229"
Excels at writing and complex tasks
const ModelClaude_3_Haiku_20240307 Model = "claude-3-haiku-20240307"
Our previous fastest and most cost-effective model
Role Assistant
Conversational role of the generated message.
This will always be "assistant".
StopReason StopReason
The reason that we stopped.
This may be one of the following values:
"end_turn": the model reached a natural stopping point
"max_tokens": we exceeded the requested max_tokens or the model's maximum
"stop_sequence": one of your provided custom stop_sequences was generated
"tool_use": the model invoked one or more tools
"pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
"refusal": when streaming classifiers intervene to handle potential policy violations
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
StopSequence string
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
Type Message
Object type.
For Messages, this is always "message".
Usage Usage
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
CacheCreation CacheCreation
Breakdown of cached tokens by TTL
Ephemeral1hInputTokens int64
The number of input tokens used to create the 1 hour cache entry.
Ephemeral5mInputTokens int64
The number of input tokens used to create the 5 minute cache entry.
CacheCreationInputTokens int64
The number of input tokens used to create the cache entry.
CacheReadInputTokens int64
The number of input tokens read from the cache.
InputTokens int64
The number of input tokens which were used.
OutputTokens int64
The number of output tokens which were used.
ServerToolUse ServerToolUsage
The number of server tool requests.
WebSearchRequests int64
The number of web search tool requests.
ServiceTier UsageServiceTier
If the request used the priority, standard, or batch tier.
Type Succeeded
type MessageBatchErroredResult struct{…}
Error ErrorResponse
Error ErrorObjectUnion
type InvalidRequestError struct{…}
Type InvalidRequestError
type AuthenticationError struct{…}
Type AuthenticationError
type BillingError struct{…}
Type BillingError
type PermissionError struct{…}
Type PermissionError
type NotFoundError struct{…}
Type NotFoundError
type RateLimitError struct{…}
Type RateLimitError
type GatewayTimeoutError struct{…}
Type TimeoutError
type APIErrorObject struct{…}
Type APIError
type OverloadedError struct{…}
Type OverloadedError
Type Error
Type Errored
type MessageBatchCanceledResult struct{…}
Type Canceled
type MessageBatchExpiredResult struct{…}
Type Expired
package main
import (
"context"
"fmt"
"github.com/anthropics/anthropic-sdk-go"
"github.com/anthropics/anthropic-sdk-go/option"
)
func main() {
client := anthropic.NewClient(
option.WithAPIKey("my-anthropic-api-key"),
)
	stream := client.Messages.Batches.ResultsStreaming(context.TODO(), "message_batch_id")
	for stream.Next() {
		messageBatchIndividualResponse := stream.Current()
		fmt.Printf("%+v\n", messageBatchIndividualResponse.CustomID)
	}
	if stream.Err() != nil {
		panic(stream.Err().Error())
	}
}