    Retrieve Message Batch results

    client.Messages.Batches.Results(ctx, messageBatchID) (*MessageBatchIndividualResponse, error)
    get/v1/messages/batches/{message_batch_id}/results

    Streams the results of a Message Batch as a .jsonl file.

    Each line in the file is a JSON object containing the result of a single request in the Message Batch. Results are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.

    Learn more about the Message Batches API in our user guide.
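Because results are not guaranteed to arrive in request order, a common pattern is to decode each .jsonl line and index it by custom_id. The sketch below uses only the standard library; the batchLine struct is an assumption that mirrors just the two top-level fields of each result line (custom_id and result), not the SDK's full response type.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// batchLine mirrors the two top-level fields of each .jsonl line:
// the developer-provided custom_id and the raw processing result.
type batchLine struct {
	CustomID string          `json:"custom_id"`
	Result   json.RawMessage `json:"result"`
}

// indexResults decodes a .jsonl stream and returns a map keyed by
// custom_id, so callers can match results to requests regardless of
// the order in which the lines arrived.
func indexResults(jsonl string) (map[string]json.RawMessage, error) {
	out := make(map[string]json.RawMessage)
	sc := bufio.NewScanner(strings.NewReader(jsonl))
	for sc.Scan() {
		if strings.TrimSpace(sc.Text()) == "" {
			continue // skip blank lines
		}
		var line batchLine
		if err := json.Unmarshal(sc.Bytes(), &line); err != nil {
			return nil, err
		}
		out[line.CustomID] = line.Result
	}
	return out, sc.Err()
}

func main() {
	// Two results arriving out of request order.
	data := `{"custom_id":"req-2","result":{"type":"succeeded"}}
{"custom_id":"req-1","result":{"type":"errored"}}`
	byID, err := indexResults(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(byID), string(byID["req-1"]))
}
```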

    Parameters
    messageBatchID string

    ID of the Message Batch.

    Returns
    type MessageBatchIndividualResponse struct{…}

    This is a single line in the response .jsonl file and does not represent the response as a whole.

    CustomID string

    Developer-provided ID created for each request in a Message Batch. Useful for matching results to requests, as results may be given out of request order.

    Must be unique for each request within the Message Batch.

    Result MessageBatchResultUnion

    Processing result for this request.

    Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.

    Accepts one of the following:
    type MessageBatchSucceededResult struct{…}
    Message Message
    ID string

    Unique object identifier.

    The format and length of IDs may change over time.

    Content []ContentBlockUnion

    Content generated by the model.

    This is an array of content blocks, each of which has a type that determines its shape.

    Example:

    [{"type": "text", "text": "Hi, I'm Claude."}]
    

    If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

    For example, if the input messages were:

    [
      {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
      {"role": "assistant", "content": "The best answer is ("}
    ]
    

    Then the response content might be:

    [{"type": "text", "text": "B)"}]
    
    Accepts one of the following:
    type TextBlock struct{…}
    Citations []TextCitationUnion

    Citations supporting the text block.

    The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.

    Accepts one of the following:
    type CitationCharLocation struct{…}
    CitedText string
    DocumentIndex int64
    DocumentTitle string
    EndCharIndex int64
    FileID string
    StartCharIndex int64
    Type CharLocation
    Accepts one of the following:
    const CharLocationCharLocation CharLocation = "char_location"
    type CitationPageLocation struct{…}
    CitedText string
    DocumentIndex int64
    DocumentTitle string
    EndPageNumber int64
    FileID string
    StartPageNumber int64
    Type PageLocation
    Accepts one of the following:
    const PageLocationPageLocation PageLocation = "page_location"
    type CitationContentBlockLocation struct{…}
    CitedText string
    DocumentIndex int64
    DocumentTitle string
    EndBlockIndex int64
    FileID string
    StartBlockIndex int64
    Type ContentBlockLocation
    Accepts one of the following:
    const ContentBlockLocationContentBlockLocation ContentBlockLocation = "content_block_location"
    type CitationsWebSearchResultLocation struct{…}
    CitedText string
    EncryptedIndex string
    Title string
    Type WebSearchResultLocation
    Accepts one of the following:
    const WebSearchResultLocationWebSearchResultLocation WebSearchResultLocation = "web_search_result_location"
    URL string
    type CitationsSearchResultLocation struct{…}
    CitedText string
    EndBlockIndex int64
    SearchResultIndex int64
    Source string
    StartBlockIndex int64
    Title string
    Type SearchResultLocation
    Accepts one of the following:
    const SearchResultLocationSearchResultLocation SearchResultLocation = "search_result_location"
    Text string
    Type Text
    Accepts one of the following:
    const TextText Text = "text"
    type ThinkingBlock struct{…}
    Signature string
    Thinking string
    Type Thinking
    Accepts one of the following:
    const ThinkingThinking Thinking = "thinking"
    type RedactedThinkingBlock struct{…}
    Data string
    Type RedactedThinking
    Accepts one of the following:
    const RedactedThinkingRedactedThinking RedactedThinking = "redacted_thinking"
    type ToolUseBlock struct{…}
    ID string
    Input map[string]any
    Name string
    Type ToolUse
    Accepts one of the following:
    const ToolUseToolUse ToolUse = "tool_use"
    type ServerToolUseBlock struct{…}
    ID string
    Input map[string]any
    Name WebSearch
    Accepts one of the following:
    const WebSearchWebSearch WebSearch = "web_search"
    Type ServerToolUse
    Accepts one of the following:
    const ServerToolUseServerToolUse ServerToolUse = "server_tool_use"
    type WebSearchToolResultBlock struct{…}
    Content WebSearchToolResultBlockContentUnion
    Accepts one of the following:
    type WebSearchToolResultError struct{…}
    ErrorCode WebSearchToolResultErrorErrorCode
    Accepts one of the following:
    const WebSearchToolResultErrorErrorCodeInvalidToolInput WebSearchToolResultErrorErrorCode = "invalid_tool_input"
    const WebSearchToolResultErrorErrorCodeUnavailable WebSearchToolResultErrorErrorCode = "unavailable"
    const WebSearchToolResultErrorErrorCodeMaxUsesExceeded WebSearchToolResultErrorErrorCode = "max_uses_exceeded"
    const WebSearchToolResultErrorErrorCodeTooManyRequests WebSearchToolResultErrorErrorCode = "too_many_requests"
    const WebSearchToolResultErrorErrorCodeQueryTooLong WebSearchToolResultErrorErrorCode = "query_too_long"
    Type WebSearchToolResultError
    Accepts one of the following:
    const WebSearchToolResultErrorWebSearchToolResultError WebSearchToolResultError = "web_search_tool_result_error"
    type WebSearchToolResultBlockContentArray []WebSearchResultBlock
    EncryptedContent string
    PageAge string
    Title string
    Type WebSearchResult
    Accepts one of the following:
    const WebSearchResultWebSearchResult WebSearchResult = "web_search_result"
    URL string
    ToolUseID string
    Type WebSearchToolResult
    Accepts one of the following:
    const WebSearchToolResultWebSearchToolResult WebSearchToolResult = "web_search_tool_result"
    Model Model

    The model that will complete your prompt.

    See models for additional details and options.

    Accepts one of the following:
    type Model string

    The model that will complete your prompt.

    See models for additional details and options.

    Accepts one of the following:
    const ModelClaudeOpus4_5_20251101 Model = "claude-opus-4-5-20251101"

    Premium model combining maximum intelligence with practical performance

    const ModelClaudeOpus4_5 Model = "claude-opus-4-5"

    Premium model combining maximum intelligence with practical performance

    const ModelClaude3_7SonnetLatest Model = "claude-3-7-sonnet-latest"

    High-performance model with early extended thinking

    const ModelClaude3_7Sonnet20250219 Model = "claude-3-7-sonnet-20250219"

    High-performance model with early extended thinking

    const ModelClaude3_5HaikuLatest Model = "claude-3-5-haiku-latest"

    Fastest and most compact model for near-instant responsiveness

    const ModelClaude3_5Haiku20241022 Model = "claude-3-5-haiku-20241022"

    Our fastest model

    const ModelClaudeHaiku4_5 Model = "claude-haiku-4-5"

    Hybrid model, capable of near-instant responses and extended thinking

    const ModelClaudeHaiku4_5_20251001 Model = "claude-haiku-4-5-20251001"

    Hybrid model, capable of near-instant responses and extended thinking

    const ModelClaudeSonnet4_20250514 Model = "claude-sonnet-4-20250514"

    High-performance model with extended thinking

    const ModelClaudeSonnet4_0 Model = "claude-sonnet-4-0"

    High-performance model with extended thinking

    const ModelClaude4Sonnet20250514 Model = "claude-4-sonnet-20250514"

    High-performance model with extended thinking

    const ModelClaudeSonnet4_5 Model = "claude-sonnet-4-5"

    Our best model for real-world agents and coding

    const ModelClaudeSonnet4_5_20250929 Model = "claude-sonnet-4-5-20250929"

    Our best model for real-world agents and coding

    const ModelClaudeOpus4_0 Model = "claude-opus-4-0"

    Our most capable model

    const ModelClaudeOpus4_20250514 Model = "claude-opus-4-20250514"

    Our most capable model

    const ModelClaude4Opus20250514 Model = "claude-4-opus-20250514"

    Our most capable model

    const ModelClaudeOpus4_1_20250805 Model = "claude-opus-4-1-20250805"

    Our most capable model

    const ModelClaude3OpusLatest Model = "claude-3-opus-latest"

    Excels at writing and complex tasks

    const ModelClaude_3_Opus_20240229 Model = "claude-3-opus-20240229"

    Excels at writing and complex tasks

    const ModelClaude_3_Haiku_20240307 Model = "claude-3-haiku-20240307"

    Our previous fastest and most cost-effective model

    string
    Role Assistant

    Conversational role of the generated message.

    This will always be "assistant".

    Accepts one of the following:
    const AssistantAssistant Assistant = "assistant"
    StopReason StopReason

    The reason that we stopped.

    This may be one of the following values:

    • "end_turn": the model reached a natural stopping point
    • "max_tokens": we exceeded the requested max_tokens or the model's maximum
    • "stop_sequence": one of your provided custom stop_sequences was generated
    • "tool_use": the model invoked one or more tools
    • "pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
    • "refusal": when streaming classifiers intervene to handle potential policy violations

    In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

    Accepts one of the following:
    const StopReasonEndTurn StopReason = "end_turn"
    const StopReasonMaxTokens StopReason = "max_tokens"
    const StopReasonStopSequence StopReason = "stop_sequence"
    const StopReasonToolUse StopReason = "tool_use"
    const StopReasonPauseTurn StopReason = "pause_turn"
    const StopReasonRefusal StopReason = "refusal"
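Client code typically branches on stop_reason to decide what to do next; pause_turn in particular means the response should be sent back as-is so the model can continue. A minimal sketch over the plain string values listed above (the helper name and the suggested actions are illustrative, not part of the API):

```go
package main

import "fmt"

// describeStopReason maps each documented stop_reason value to the
// follow-up action a client would typically take.
func describeStopReason(reason string) string {
	switch reason {
	case "end_turn":
		return "done: the model reached a natural stopping point"
	case "max_tokens":
		return "truncated: raise max_tokens or shorten the prompt"
	case "stop_sequence":
		return "done: a custom stop sequence was generated"
	case "tool_use":
		return "run the requested tools and send back tool results"
	case "pause_turn":
		return "resend the response as-is to let the model continue"
	case "refusal":
		return "handle the refusal; do not retry unchanged"
	default:
		return "unknown stop reason"
	}
}

func main() {
	for _, r := range []string{"end_turn", "pause_turn", "refusal"} {
		fmt.Printf("%s -> %s\n", r, describeStopReason(r))
	}
}
```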
    StopSequence string

    Which custom stop sequence was generated, if any.

    This value will be a non-null string if one of your custom stop sequences was generated.

    Type Message

    Object type.

    For Messages, this is always "message".

    Accepts one of the following:
    const MessageMessage Message = "message"
    Usage Usage

    Billing and rate-limit usage.

    Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

    Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

    For example, output_tokens will be non-zero, even for an empty string response from Claude.

    The total input token count for a request is the sum of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
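That total can be computed directly from the usage fields. The sketch below uses a local struct whose fields mirror the Usage fields documented here; it is a stdlib-only illustration, not the SDK's Usage type.

```go
package main

import "fmt"

// usage mirrors the token-count fields of the API's Usage object.
type usage struct {
	InputTokens              int64
	CacheCreationInputTokens int64
	CacheReadInputTokens     int64
	OutputTokens             int64
}

// totalInputTokens sums the three components that make up a request's
// total input token count: uncached input, cache writes, and cache reads.
func totalInputTokens(u usage) int64 {
	return u.InputTokens + u.CacheCreationInputTokens + u.CacheReadInputTokens
}

func main() {
	u := usage{
		InputTokens:              10,
		CacheCreationInputTokens: 2048,
		CacheReadInputTokens:     512,
	}
	fmt.Println(totalInputTokens(u))
}
```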

    CacheCreation CacheCreation

    Breakdown of cached tokens by TTL

    Ephemeral1hInputTokens int64

    The number of input tokens used to create the 1 hour cache entry.

    minimum: 0
    Ephemeral5mInputTokens int64

    The number of input tokens used to create the 5 minute cache entry.

    minimum: 0
    CacheCreationInputTokens int64

    The number of input tokens used to create the cache entry.

    minimum: 0
    CacheReadInputTokens int64

    The number of input tokens read from the cache.

    minimum: 0
    InputTokens int64

    The number of input tokens which were used.

    minimum: 0
    OutputTokens int64

    The number of output tokens which were used.

    minimum: 0
    ServerToolUse ServerToolUsage

    The number of server tool requests.

    WebSearchRequests int64

    The number of web search tool requests.

    minimum: 0
    ServiceTier UsageServiceTier

    If the request used the priority, standard, or batch tier.

    Accepts one of the following:
    const UsageServiceTierStandard UsageServiceTier = "standard"
    const UsageServiceTierPriority UsageServiceTier = "priority"
    const UsageServiceTierBatch UsageServiceTier = "batch"
    Type Succeeded
    Accepts one of the following:
    const SucceededSucceeded Succeeded = "succeeded"
    type MessageBatchErroredResult struct{…}
    Error ErrorResponse
    Error ErrorObjectUnion
    Accepts one of the following:
    type InvalidRequestError struct{…}
    Message string
    Type InvalidRequestError
    Accepts one of the following:
    const InvalidRequestErrorInvalidRequestError InvalidRequestError = "invalid_request_error"
    type AuthenticationError struct{…}
    Message string
    Type AuthenticationError
    Accepts one of the following:
    const AuthenticationErrorAuthenticationError AuthenticationError = "authentication_error"
    type BillingError struct{…}
    Message string
    Type BillingError
    Accepts one of the following:
    const BillingErrorBillingError BillingError = "billing_error"
    type PermissionError struct{…}
    Message string
    Type PermissionError
    Accepts one of the following:
    const PermissionErrorPermissionError PermissionError = "permission_error"
    type NotFoundError struct{…}
    Message string
    Type NotFoundError
    Accepts one of the following:
    const NotFoundErrorNotFoundError NotFoundError = "not_found_error"
    type RateLimitError struct{…}
    Message string
    Type RateLimitError
    Accepts one of the following:
    const RateLimitErrorRateLimitError RateLimitError = "rate_limit_error"
    type GatewayTimeoutError struct{…}
    Message string
    Type TimeoutError
    Accepts one of the following:
    const TimeoutErrorTimeoutError TimeoutError = "timeout_error"
    type APIErrorObject struct{…}
    Message string
    Type APIError
    Accepts one of the following:
    const APIErrorAPIError APIError = "api_error"
    type OverloadedError struct{…}
    Message string
    Type OverloadedError
    Accepts one of the following:
    const OverloadedErrorOverloadedError OverloadedError = "overloaded_error"
    RequestID string
    Type Error
    Accepts one of the following:
    const ErrorError Error = "error"
    Type Errored
    Accepts one of the following:
    const ErroredErrored Errored = "errored"
    type MessageBatchCanceledResult struct{…}
    Type Canceled
    Accepts one of the following:
    const CanceledCanceled Canceled = "canceled"
    type MessageBatchExpiredResult struct{…}
    Type Expired
    Accepts one of the following:
    const ExpiredExpired Expired = "expired"
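When reading result lines, clients usually dispatch on the result's type discriminator ("succeeded", "errored", "canceled", or "expired") before inspecting the full payload. A stdlib-only sketch of that dispatch; the envelope struct and the bucket names are illustrative assumptions, and the real SDK union types carry the full Message or error payloads:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resultEnvelope decodes just the discriminator of a
// MessageBatchResult union value.
type resultEnvelope struct {
	Type string `json:"type"`
}

// classifyResult buckets one raw result by its type field, which is
// often all a retry loop needs before decoding the full payload.
func classifyResult(raw []byte) (string, error) {
	var env resultEnvelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return "", err
	}
	switch env.Type {
	case "succeeded":
		return "ok", nil
	case "errored":
		return "inspect-error", nil
	case "canceled", "expired":
		return "not-attempted", nil
	default:
		return "", fmt.Errorf("unknown result type %q", env.Type)
	}
}

func main() {
	for _, raw := range []string{
		`{"type":"succeeded","message":{}}`,
		`{"type":"expired"}`,
	} {
		bucket, err := classifyResult([]byte(raw))
		if err != nil {
			panic(err)
		}
		fmt.Println(bucket)
	}
}
```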
    Retrieve Message Batch results
    package main
    
    import (
      "context"
      "fmt"
    
      "github.com/anthropics/anthropic-sdk-go"
      "github.com/anthropics/anthropic-sdk-go/option"
    )
    
    func main() {
      client := anthropic.NewClient(
        option.WithAPIKey("my-anthropic-api-key"),
      )
      stream := client.Messages.Batches.ResultsStreaming(context.TODO(), "message_batch_id")
      for stream.Next() {
        result := stream.Current()
        fmt.Printf("%+v\n", result.CustomID)
      }
      if stream.Err() != nil {
        panic(stream.Err().Error())
      }
    }
    