    Retrieve Message Batch results

MessageBatchIndividualResponse messages().batches().resultsStreaming(BatchResultsParams params = BatchResultsParams.none(), RequestOptions requestOptions = RequestOptions.none())
GET /v1/messages/batches/{message_batch_id}/results

    Streams the results of a Message Batch as a .jsonl file.

    Each line in the file is a JSON object containing the result of a single request in the Message Batch. Results are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.

Learn more about the Message Batches API in our user guide.
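As a plain-Java sketch of the matching step described above (hypothetical helper and names; the real SDK types are documented below), each result line can be joined back to its originating request via custom_id:

```java
// Hypothetical illustration: result lines may arrive in any order, so use
// custom_id to find the result that belongs to a given request.
public final class BatchResultLookup {
    // customIds[i] and texts[i] stand in for one parsed line of the .jsonl stream.
    static String resultFor(String requestCustomId, String[] customIds, String[] texts) {
        for (int i = 0; i < customIds.length; i++) {
            if (customIds[i].equals(requestCustomId)) {
                return texts[i];
            }
        }
        // No result with that custom_id was seen in the stream.
        return null;
    }
}
```

Because custom_id is unique within a batch, the first match is the only match.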

Parameters
    BatchResultsParams params
    Optional<String> messageBatchId

    ID of the Message Batch.

Returns
    class MessageBatchIndividualResponse:

    This is a single line in the response .jsonl file and does not represent the response as a whole.

    String customId

    Developer-provided ID created for each request in a Message Batch. Useful for matching results to requests, as results may be given out of request order.

    Must be unique for each request within the Message Batch.

    MessageBatchResult result

    Processing result for this request.

    Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.

    Accepts one of the following:
    class MessageBatchSucceededResult:
    Message message
    String id

    Unique object identifier.

    The format and length of IDs may change over time.

    List<ContentBlock> content

    Content generated by the model.

    This is an array of content blocks, each of which has a type that determines its shape.

    Example:

    [{"type": "text", "text": "Hi, I'm Claude."}]
    

    If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

    For example, if the input messages were:

    [
      {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
      {"role": "assistant", "content": "The best answer is ("}
    ]
    

    Then the response content might be:

    [{"type": "text", "text": "B)"}]
    
    Accepts one of the following:
    class TextBlock:
    Optional<List<TextCitation>> citations

    Citations supporting the text block.

    The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.

    Accepts one of the following:
    class CitationCharLocation:
    String citedText
    long documentIndex
    Optional<String> documentTitle
    long endCharIndex
    Optional<String> fileId
    long startCharIndex
JsonValue type: constant "char_location"
    Accepts one of the following:
    CHAR_LOCATION("char_location")
    class CitationPageLocation:
    String citedText
    long documentIndex
    Optional<String> documentTitle
    long endPageNumber
    Optional<String> fileId
    long startPageNumber
JsonValue type: constant "page_location"
    Accepts one of the following:
    PAGE_LOCATION("page_location")
    class CitationContentBlockLocation:
    String citedText
    long documentIndex
    Optional<String> documentTitle
    long endBlockIndex
    Optional<String> fileId
    long startBlockIndex
JsonValue type: constant "content_block_location"
    Accepts one of the following:
    CONTENT_BLOCK_LOCATION("content_block_location")
    class CitationsWebSearchResultLocation:
    String citedText
    String encryptedIndex
    Optional<String> title
JsonValue type: constant "web_search_result_location"
    Accepts one of the following:
    WEB_SEARCH_RESULT_LOCATION("web_search_result_location")
    String url
    class CitationsSearchResultLocation:
    String citedText
    long endBlockIndex
    long searchResultIndex
    String source
    long startBlockIndex
    Optional<String> title
JsonValue type: constant "search_result_location"
    Accepts one of the following:
    SEARCH_RESULT_LOCATION("search_result_location")
    String text
JsonValue type: constant "text"
    Accepts one of the following:
    TEXT("text")
    class ThinkingBlock:
    String signature
    String thinking
JsonValue type: constant "thinking"
    Accepts one of the following:
    THINKING("thinking")
    class RedactedThinkingBlock:
    String data
JsonValue type: constant "redacted_thinking"
    Accepts one of the following:
    REDACTED_THINKING("redacted_thinking")
    class ToolUseBlock:
    String id
    Input input
    String name
JsonValue type: constant "tool_use"
    Accepts one of the following:
    TOOL_USE("tool_use")
    class ServerToolUseBlock:
    String id
    Input input
JsonValue name: constant "web_search"
    Accepts one of the following:
    WEB_SEARCH("web_search")
JsonValue type: constant "server_tool_use"
    Accepts one of the following:
    SERVER_TOOL_USE("server_tool_use")
    class WebSearchToolResultBlock:
    WebSearchToolResultBlockContent content
    Accepts one of the following:
    class WebSearchToolResultError:
    ErrorCode errorCode
    Accepts one of the following:
    INVALID_TOOL_INPUT("invalid_tool_input")
    UNAVAILABLE("unavailable")
    MAX_USES_EXCEEDED("max_uses_exceeded")
    TOO_MANY_REQUESTS("too_many_requests")
    QUERY_TOO_LONG("query_too_long")
JsonValue type: constant "web_search_tool_result_error"
    Accepts one of the following:
    WEB_SEARCH_TOOL_RESULT_ERROR("web_search_tool_result_error")
    List<WebSearchResultBlock>
    String encryptedContent
    Optional<String> pageAge
    String title
JsonValue type: constant "web_search_result"
    Accepts one of the following:
    WEB_SEARCH_RESULT("web_search_result")
    String url
    String toolUseId
JsonValue type: constant "web_search_tool_result"
    Accepts one of the following:
    WEB_SEARCH_TOOL_RESULT("web_search_tool_result")
    Model model

    The model that will complete your prompt.

    See models for additional details and options.

    Accepts one of the following:
    CLAUDE_OPUS_4_5_20251101("claude-opus-4-5-20251101")

    Premium model combining maximum intelligence with practical performance

    CLAUDE_OPUS_4_5("claude-opus-4-5")

    Premium model combining maximum intelligence with practical performance

    CLAUDE_3_7_SONNET_LATEST("claude-3-7-sonnet-latest")

    High-performance model with early extended thinking

    CLAUDE_3_7_SONNET_20250219("claude-3-7-sonnet-20250219")

    High-performance model with early extended thinking

    CLAUDE_3_5_HAIKU_LATEST("claude-3-5-haiku-latest")

    Fastest and most compact model for near-instant responsiveness

    CLAUDE_3_5_HAIKU_20241022("claude-3-5-haiku-20241022")

    Our fastest model

    CLAUDE_HAIKU_4_5("claude-haiku-4-5")

    Hybrid model, capable of near-instant responses and extended thinking

    CLAUDE_HAIKU_4_5_20251001("claude-haiku-4-5-20251001")

    Hybrid model, capable of near-instant responses and extended thinking

    CLAUDE_SONNET_4_20250514("claude-sonnet-4-20250514")

    High-performance model with extended thinking

    CLAUDE_SONNET_4_0("claude-sonnet-4-0")

    High-performance model with extended thinking

    CLAUDE_4_SONNET_20250514("claude-4-sonnet-20250514")

    High-performance model with extended thinking

    CLAUDE_SONNET_4_5("claude-sonnet-4-5")

    Our best model for real-world agents and coding

    CLAUDE_SONNET_4_5_20250929("claude-sonnet-4-5-20250929")

    Our best model for real-world agents and coding

    CLAUDE_OPUS_4_0("claude-opus-4-0")

    Our most capable model

    CLAUDE_OPUS_4_20250514("claude-opus-4-20250514")

    Our most capable model

    CLAUDE_4_OPUS_20250514("claude-4-opus-20250514")

    Our most capable model

    CLAUDE_OPUS_4_1_20250805("claude-opus-4-1-20250805")

    Our most capable model

    CLAUDE_3_OPUS_LATEST("claude-3-opus-latest")

    Excels at writing and complex tasks

    CLAUDE_3_OPUS_20240229("claude-3-opus-20240229")

    Excels at writing and complex tasks

    CLAUDE_3_HAIKU_20240307("claude-3-haiku-20240307")

Our previous fastest and most cost-effective model

JsonValue role: constant "assistant"

    Conversational role of the generated message.

    This will always be "assistant".

    Accepts one of the following:
    ASSISTANT("assistant")
    Optional<StopReason> stopReason

    The reason that we stopped.

This may be one of the following values:

    • "end_turn": the model reached a natural stopping point
    • "max_tokens": we exceeded the requested max_tokens or the model's maximum
    • "stop_sequence": one of your provided custom stop_sequences was generated
    • "tool_use": the model invoked one or more tools
    • "pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
    • "refusal": when streaming classifiers intervene to handle potential policy violations

    In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

    Accepts one of the following:
    END_TURN("end_turn")
    MAX_TOKENS("max_tokens")
    STOP_SEQUENCE("stop_sequence")
    TOOL_USE("tool_use")
    PAUSE_TURN("pause_turn")
    REFUSAL("refusal")
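A minimal sketch of branching on these values (hypothetical helper; it uses the raw strings rather than the SDK's StopReason enum):

```java
// Hypothetical illustration: decide what to do next from the stop_reason value.
// "pause_turn" responses can be sent back as-is so the model continues.
public final class StopReasonHandler {
    static String nextAction(String stopReason) {
        switch (stopReason) {
            case "end_turn":
            case "stop_sequence":
                return "done";
            case "max_tokens":
                return "retry-with-higher-max-tokens";
            case "tool_use":
                return "run-tools-and-continue";
            case "pause_turn":
                return "resend-response-to-continue";
            case "refusal":
                return "surface-refusal-to-user";
            default:
                return "unknown";
        }
    }
}
```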
    Optional<String> stopSequence

    Which custom stop sequence was generated, if any.

    This value will be a non-null string if one of your custom stop sequences was generated.

JsonValue type: constant "message"

    Object type.

    For Messages, this is always "message".

    Accepts one of the following:
    MESSAGE("message")
    Usage usage

    Billing and rate-limit usage.

    Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

    Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

    For example, output_tokens will be non-zero, even for an empty string response from Claude.

    Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
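The summation above can be sketched as follows (the numbers are made up; the parameter names mirror the Usage fields):

```java
// Total input tokens = inputTokens + cacheCreationInputTokens + cacheReadInputTokens,
// per the Usage accounting described above.
public final class UsageTotals {
    static long totalInputTokens(long inputTokens,
                                 long cacheCreationInputTokens,
                                 long cacheReadInputTokens) {
        return inputTokens + cacheCreationInputTokens + cacheReadInputTokens;
    }
}
```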

    Optional<CacheCreation> cacheCreation

    Breakdown of cached tokens by TTL

    long ephemeral1hInputTokens

    The number of input tokens used to create the 1 hour cache entry.

minimum: 0
    long ephemeral5mInputTokens

    The number of input tokens used to create the 5 minute cache entry.

minimum: 0
    Optional<Long> cacheCreationInputTokens

    The number of input tokens used to create the cache entry.

minimum: 0
    Optional<Long> cacheReadInputTokens

    The number of input tokens read from the cache.

minimum: 0
    long inputTokens

    The number of input tokens which were used.

minimum: 0
    long outputTokens

    The number of output tokens which were used.

minimum: 0
    Optional<ServerToolUsage> serverToolUse

    The number of server tool requests.

    long webSearchRequests

    The number of web search tool requests.

minimum: 0
    Optional<ServiceTier> serviceTier

    If the request used the priority, standard, or batch tier.

    Accepts one of the following:
    STANDARD("standard")
    PRIORITY("priority")
    BATCH("batch")
JsonValue type: constant "succeeded"
    Accepts one of the following:
    SUCCEEDED("succeeded")
    class MessageBatchErroredResult:
    ErrorResponse error
    ErrorObject error
    Accepts one of the following:
    class InvalidRequestError:
    String message
JsonValue type: constant "invalid_request_error"
    Accepts one of the following:
    INVALID_REQUEST_ERROR("invalid_request_error")
    class AuthenticationError:
    String message
JsonValue type: constant "authentication_error"
    Accepts one of the following:
    AUTHENTICATION_ERROR("authentication_error")
    class BillingError:
    String message
JsonValue type: constant "billing_error"
    Accepts one of the following:
    BILLING_ERROR("billing_error")
    class PermissionError:
    String message
JsonValue type: constant "permission_error"
    Accepts one of the following:
    PERMISSION_ERROR("permission_error")
    class NotFoundError:
    String message
JsonValue type: constant "not_found_error"
    Accepts one of the following:
    NOT_FOUND_ERROR("not_found_error")
    class RateLimitError:
    String message
JsonValue type: constant "rate_limit_error"
    Accepts one of the following:
    RATE_LIMIT_ERROR("rate_limit_error")
    class GatewayTimeoutError:
    String message
JsonValue type: constant "timeout_error"
    Accepts one of the following:
    TIMEOUT_ERROR("timeout_error")
    class ApiErrorObject:
    String message
JsonValue type: constant "api_error"
    Accepts one of the following:
    API_ERROR("api_error")
    class OverloadedError:
    String message
JsonValue type: constant "overloaded_error"
    Accepts one of the following:
    OVERLOADED_ERROR("overloaded_error")
    Optional<String> requestId
JsonValue type: constant "error"
    Accepts one of the following:
    ERROR("error")
JsonValue type: constant "errored"
    Accepts one of the following:
    ERRORED("errored")
    class MessageBatchCanceledResult:
JsonValue type: constant "canceled"
    Accepts one of the following:
    CANCELED("canceled")
    class MessageBatchExpiredResult:
JsonValue type: constant "expired"
    Accepts one of the following:
    EXPIRED("expired")
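The four result variants above are discriminated by their type field. A plain-Java sketch of dispatching on that discriminator (hypothetical helper; the SDK exposes typed result classes as listed above):

```java
// Hypothetical illustration: each .jsonl line's "result" carries a type
// discriminator ("succeeded", "errored", "canceled", or "expired").
public final class ResultDispatch {
    static String describe(String resultType) {
        switch (resultType) {
            case "succeeded": return "contains a Message";
            case "errored":   return "contains an error response";
            case "canceled":  return "request was canceled before processing";
            case "expired":   return "batch expired before processing";
            default:          return "unknown result type";
        }
    }
}
```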
    Retrieve Message Batch results
package com.anthropic.example;

import com.anthropic.client.AnthropicClient;
import com.anthropic.client.okhttp.AnthropicOkHttpClient;
import com.anthropic.core.http.StreamResponse;
import com.anthropic.models.messages.batches.MessageBatchIndividualResponse;

public final class Main {
    private Main() {}

    public static void main(String[] args) {
        // Reads configuration (e.g. ANTHROPIC_API_KEY) from the environment.
        AnthropicClient client = AnthropicOkHttpClient.fromEnv();

        // Stream the .jsonl results; try-with-resources closes the response when done.
        try (StreamResponse<MessageBatchIndividualResponse> response =
                client.messages().batches().resultsStreaming("message_batch_id")) {
            response.stream().forEach(result -> System.out.println(result.customId()));
        }
    }
}
