    Retrieve a Message Batch

beta.messages.batches.retrieve(message_batch_id: str, **kwargs: BatchRetrieveParams) -> BetaMessageBatch
GET /v1/messages/batches/{message_batch_id}

    This endpoint is idempotent and can be used to poll for Message Batch completion. To access the results of a Message Batch, make a request to the results_url field in the response.

Learn more about the Message Batches API in our user guide.
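
Because retrieval is idempotent, a common pattern is to poll this endpoint until processing_status reaches "ended" before fetching results. A minimal sketch, assuming a configured client; the polling interval is illustrative, not a recommendation from this reference:

import os
import time

from anthropic import Anthropic

client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

def wait_for_batch(message_batch_id: str, poll_interval_seconds: float = 60.0):
    """Poll the Message Batch until processing ends, then return the final object."""
    while True:
        batch = client.beta.messages.batches.retrieve(
            message_batch_id=message_batch_id,
        )
        if batch.processing_status == "ended":
            return batch
        time.sleep(poll_interval_seconds)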

Parameters
    message_batch_id: str

    ID of the Message Batch.

    betas: Optional[List[AnthropicBetaParam]]

    Optional header to specify the beta version(s) you want to use.

Each entry accepts either an arbitrary beta version string (str) or one of the following known values:
    "message-batches-2024-09-24"
    "prompt-caching-2024-07-31"
    "computer-use-2024-10-22"
    "computer-use-2025-01-24"
    "pdfs-2024-09-25"
    "token-counting-2024-11-01"
    "token-efficient-tools-2025-02-19"
    "output-128k-2025-02-19"
    "files-api-2025-04-14"
    "mcp-client-2025-04-04"
    "mcp-client-2025-11-20"
    "dev-full-thinking-2025-05-14"
    "interleaved-thinking-2025-05-14"
    "code-execution-2025-05-22"
    "extended-cache-ttl-2025-04-11"
    "context-1m-2025-08-07"
    "context-management-2025-06-27"
    "model-context-window-exceeded-2025-08-26"
    "skills-2025-10-02"
Returns
    class BetaMessageBatch: …
    id: str

    Unique object identifier.

    The format and length of IDs may change over time.

    archived_at: Optional[datetime]

    RFC 3339 datetime string representing the time at which the Message Batch was archived and its results became unavailable.

format: date-time
    cancel_initiated_at: Optional[datetime]

    RFC 3339 datetime string representing the time at which cancellation was initiated for the Message Batch. Specified only if cancellation was initiated.

format: date-time
    created_at: datetime

    RFC 3339 datetime string representing the time at which the Message Batch was created.

format: date-time
    ended_at: Optional[datetime]

    RFC 3339 datetime string representing the time at which processing for the Message Batch ended. Specified only once processing ends.

Processing ends when every request in a Message Batch has either succeeded, errored, been canceled, or expired.

format: date-time
    expires_at: datetime

    RFC 3339 datetime string representing the time at which the Message Batch will expire and end processing, which is 24 hours after creation.

format: date-time
    processing_status: Literal["in_progress", "canceling", "ended"]

    Processing status of the Message Batch.

    Accepts one of the following:
    "in_progress"
    "canceling"
    "ended"
    request_counts: BetaMessageBatchRequestCounts

    Tallies requests within the Message Batch, categorized by their status.

    Requests start as processing and move to one of the other statuses only once processing of the entire batch ends. The sum of all values always matches the total number of requests in the batch.

    canceled: int

    Number of requests in the Message Batch that have been canceled.

    This is zero until processing of the entire Message Batch has ended.

    errored: int

    Number of requests in the Message Batch that encountered an error.

    This is zero until processing of the entire Message Batch has ended.

    expired: int

    Number of requests in the Message Batch that have expired.

    This is zero until processing of the entire Message Batch has ended.

    processing: int

    Number of requests in the Message Batch that are processing.

    succeeded: int

    Number of requests in the Message Batch that have completed successfully.

    This is zero until processing of the entire Message Batch has ended.
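
Because every request in the batch lands in exactly one of the buckets above, the counts can be summed to report progress while polling. A small illustrative helper; the output format is arbitrary:

def summarize_counts(batch) -> str:
    """Report progress for the BetaMessageBatch returned by retrieve()."""
    c = batch.request_counts
    total = c.processing + c.succeeded + c.errored + c.canceled + c.expired
    finished = total - c.processing
    return f"{finished}/{total} requests finished (status: {batch.processing_status})"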

    results_url: Optional[str]

    URL to a .jsonl file containing the results of the Message Batch requests. Specified only once processing ends.

    Results in the file are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.
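
Once results_url is populated, the .jsonl file can be downloaded with an authenticated GET and matched back to the original requests by custom_id. A hedged sketch using httpx: it assumes the standard x-api-key and anthropic-version headers and that each line carries custom_id and result fields; see the "Retrieve Message Batch results" endpoint for the authoritative response shape.

import json
import os

import httpx

def fetch_results(results_url: str) -> dict[str, dict]:
    """Download the .jsonl results file and index each entry by its custom_id."""
    response = httpx.get(
        results_url,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
        },
    )
    response.raise_for_status()
    results: dict[str, dict] = {}
    for line in response.text.splitlines():
        if line.strip():
            entry = json.loads(line)
            results[entry["custom_id"]] = entry["result"]
    return results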

    type: Literal["message_batch"]

    Object type.

    For Message Batches, this is always "message_batch".

    Accepts one of the following:
    "message_batch"
    Retrieve a Message Batch
    import os
    from anthropic import Anthropic
    
    client = Anthropic(
        api_key=os.environ.get("ANTHROPIC_API_KEY"),  # This is the default and can be omitted
    )
    beta_message_batch = client.beta.messages.batches.retrieve(
        message_batch_id="message_batch_id",
    )
    print(beta_message_batch.id)
    Response 200
    {
      "id": "msgbatch_013Zva2CMHLNnXjNJJKqJ2EF",
      "archived_at": "2024-08-20T18:37:24.100435Z",
      "cancel_initiated_at": "2024-08-20T18:37:24.100435Z",
      "created_at": "2024-08-20T18:37:24.100435Z",
      "ended_at": "2024-08-20T18:37:24.100435Z",
      "expires_at": "2024-08-20T18:37:24.100435Z",
      "processing_status": "in_progress",
      "request_counts": {
        "canceled": 10,
        "errored": 30,
        "expired": 10,
        "processing": 100,
        "succeeded": 50
      },
      "results_url": "https://api.anthropic.com/v1/messages/batches/msgbatch_013Zva2CMHLNnXjNJJKqJ2EF/results",
      "type": "message_batch"
    }
