    Batches

    Cancel a Message Batch
messages().batches().cancel(BatchCancelParams params = BatchCancelParams.none(), RequestOptions requestOptions = RequestOptions.none()): MessageBatch
POST /v1/messages/batches/{message_batch_id}/cancel
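A minimal sketch of cancelling a batch with the Java SDK. Only the cancel(BatchCancelParams, RequestOptions) call above is taken from this page; the client construction, import paths, and the messageBatchId builder setter are assumptions based on the SDK's usual conventions.

    import com.anthropic.client.AnthropicClient;
    import com.anthropic.client.okhttp.AnthropicOkHttpClient;
    import com.anthropic.models.messages.batches.BatchCancelParams;
    import com.anthropic.models.messages.batches.MessageBatch;

    public final class CancelBatchExample {
        public static void main(String[] args) {
            // Reads ANTHROPIC_API_KEY from the environment (assumed helper).
            AnthropicClient client = AnthropicOkHttpClient.fromEnv();

            // Cancellation is asynchronous: the returned batch typically reports
            // processing status "canceling" until in-flight requests wind down.
            MessageBatch batch = client.messages().batches().cancel(
                    BatchCancelParams.builder()
                            .messageBatchId("msgbatch_01ABC")  // hypothetical batch ID
                            .build());
            System.out.println(batch.id() + ": " + batch.processingStatus());
        }
    }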
    Create a Message Batch
messages().batches().create(BatchCreateParams params, RequestOptions requestOptions = RequestOptions.none()): MessageBatch
POST /v1/messages/batches
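A minimal sketch of creating a batch. The nested BatchCreateParams.Request and Params builders, addUserMessage, and the import paths are assumptions modeled on the SDK's builder conventions; the Model constant and the expiresAt field are documented later in this section.

    import com.anthropic.client.AnthropicClient;
    import com.anthropic.client.okhttp.AnthropicOkHttpClient;
    import com.anthropic.models.messages.Model;
    import com.anthropic.models.messages.batches.BatchCreateParams;
    import com.anthropic.models.messages.batches.MessageBatch;

    public final class CreateBatchExample {
        public static void main(String[] args) {
            AnthropicClient client = AnthropicOkHttpClient.fromEnv();

            // Each request carries a custom_id so its result can be matched later;
            // results are not guaranteed to come back in request order.
            BatchCreateParams params = BatchCreateParams.builder()
                    .addRequest(BatchCreateParams.Request.builder()
                            .customId("question-1")  // hypothetical custom ID
                            .params(BatchCreateParams.Request.Params.builder()
                                    .model(Model.CLAUDE_SONNET_4_5)
                                    .maxTokens(1024)
                                    .addUserMessage("What's the Greek name for Sun?")
                                    .build())
                            .build())
                    .build();

            MessageBatch batch = client.messages().batches().create(params);
            System.out.println("Created " + batch.id() + ", expires at " + batch.expiresAt());
        }
    }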
    Delete a Message Batch
messages().batches().delete(BatchDeleteParams params = BatchDeleteParams.none(), RequestOptions requestOptions = RequestOptions.none()): DeletedMessageBatch
DELETE /v1/messages/batches/{message_batch_id}
    List Message Batches
messages().batches().list(BatchListParams params = BatchListParams.none(), RequestOptions requestOptions = RequestOptions.none()): BatchListPage
GET /v1/messages/batches
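A minimal listing sketch. The limit setter and the autoPager() pagination helper on BatchListPage are assumptions; verify them against the generated BatchListPage class before relying on them.

    import com.anthropic.client.AnthropicClient;
    import com.anthropic.client.okhttp.AnthropicOkHttpClient;
    import com.anthropic.models.messages.batches.BatchListPage;
    import com.anthropic.models.messages.batches.BatchListParams;

    public final class ListBatchesExample {
        public static void main(String[] args) {
            AnthropicClient client = AnthropicOkHttpClient.fromEnv();

            BatchListPage page = client.messages().batches().list(
                    BatchListParams.builder().limit(20).build());

            // autoPager() is assumed to iterate across pages transparently.
            page.autoPager().forEach(batch ->
                    System.out.println(batch.id() + " " + batch.processingStatus()));
        }
    }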
    Retrieve Message Batch results
messages().batches().resultsStreaming(BatchResultsParams params = BatchResultsParams.none(), RequestOptions requestOptions = RequestOptions.none()): MessageBatchIndividualResponse
GET /v1/messages/batches/{message_batch_id}/results
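A minimal sketch of streaming results once processing has ended. Wrapping the return value in a StreamResponse and closing it with try-with-resources is an assumption about the generated streaming helpers; customId and result mirror the MessageBatchIndividualResponse fields documented below.

    import com.anthropic.client.AnthropicClient;
    import com.anthropic.client.okhttp.AnthropicOkHttpClient;
    import com.anthropic.core.http.StreamResponse;
    import com.anthropic.models.messages.batches.BatchResultsParams;
    import com.anthropic.models.messages.batches.MessageBatchIndividualResponse;

    public final class BatchResultsExample {
        public static void main(String[] args) {
            AnthropicClient client = AnthropicOkHttpClient.fromEnv();

            // Each streamed element corresponds to one line of the results .jsonl file.
            try (StreamResponse<MessageBatchIndividualResponse> results =
                    client.messages().batches().resultsStreaming(
                            BatchResultsParams.builder()
                                    .messageBatchId("msgbatch_01ABC")  // hypothetical batch ID
                                    .build())) {
                results.stream().forEach(line ->
                        // Results may arrive out of request order; match on custom_id.
                        System.out.println(line.customId() + " -> " + line.result()));
            }
        }
    }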
    Retrieve a Message Batch
messages().batches().retrieve(BatchRetrieveParams params = BatchRetrieveParams.none(), RequestOptions requestOptions = RequestOptions.none()): MessageBatch
GET /v1/messages/batches/{message_batch_id}
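A minimal polling sketch built on retrieve. The location of the ProcessingStatus constants is an assumption; the status values themselves ("in_progress", "canceling", "ended") are documented under MessageBatch below.

    import com.anthropic.client.AnthropicClient;
    import com.anthropic.client.okhttp.AnthropicOkHttpClient;
    import com.anthropic.models.messages.batches.BatchRetrieveParams;
    import com.anthropic.models.messages.batches.MessageBatch;

    public final class PollBatchExample {
        public static void main(String[] args) throws InterruptedException {
            AnthropicClient client = AnthropicOkHttpClient.fromEnv();
            String batchId = "msgbatch_01ABC";  // hypothetical batch ID

            MessageBatch batch = client.messages().batches().retrieve(
                    BatchRetrieveParams.builder().messageBatchId(batchId).build());

            // Poll until processing ends; batches expire 24 hours after creation,
            // so this loop always terminates.
            while (!batch.processingStatus().equals(MessageBatch.ProcessingStatus.ENDED)) {
                Thread.sleep(60_000);
                batch = client.messages().batches().retrieve(
                        BatchRetrieveParams.builder().messageBatchId(batchId).build());
            }
            System.out.println("Ended with counts: " + batch.requestCounts());
        }
    }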
Models
    class DeletedMessageBatch:
    id: String

    ID of the Message Batch.

type: JsonValue; constant "message_batch_deleted"

    Deleted object type.

    For Message Batches, this is always "message_batch_deleted".

    Accepts one of the following:
    MESSAGE_BATCH_DELETED("message_batch_deleted")
    class MessageBatch:
    id: String

    Unique object identifier.

    The format and length of IDs may change over time.

    archivedAt: Optional<LocalDateTime>

    RFC 3339 datetime string representing the time at which the Message Batch was archived and its results became unavailable.

format: date-time
    cancelInitiatedAt: Optional<LocalDateTime>

    RFC 3339 datetime string representing the time at which cancellation was initiated for the Message Batch. Specified only if cancellation was initiated.

format: date-time
    createdAt: LocalDateTime

    RFC 3339 datetime string representing the time at which the Message Batch was created.

format: date-time
    endedAt: Optional<LocalDateTime>

    RFC 3339 datetime string representing the time at which processing for the Message Batch ended. Specified only once processing ends.

    Processing ends when every request in a Message Batch has either succeeded, errored, canceled, or expired.

format: date-time
    expiresAt: LocalDateTime

    RFC 3339 datetime string representing the time at which the Message Batch will expire and end processing, which is 24 hours after creation.

format: date-time
    processingStatus: ProcessingStatus

    Processing status of the Message Batch.

    Accepts one of the following:
    IN_PROGRESS("in_progress")
    CANCELING("canceling")
    ENDED("ended")
    requestCounts: MessageBatchRequestCounts

    Tallies requests within the Message Batch, categorized by their status.

    Requests start as processing and move to one of the other statuses only once processing of the entire batch ends. The sum of all values always matches the total number of requests in the batch.

    canceled: Long

    Number of requests in the Message Batch that have been canceled.

    This is zero until processing of the entire Message Batch has ended.

    errored: Long

    Number of requests in the Message Batch that encountered an error.

    This is zero until processing of the entire Message Batch has ended.

    expired: Long

    Number of requests in the Message Batch that have expired.

    This is zero until processing of the entire Message Batch has ended.

    processing: Long

    Number of requests in the Message Batch that are processing.

    succeeded: Long

    Number of requests in the Message Batch that have completed successfully.

    This is zero until processing of the entire Message Batch has ended.

    resultsUrl: Optional<String>

    URL to a .jsonl file containing the results of the Message Batch requests. Specified only once processing ends.

    Results in the file are not guaranteed to be in the same order as requests. Use the custom_id field to match results to requests.

type: JsonValue; constant "message_batch"

    Object type.

    For Message Batches, this is always "message_batch".

    Accepts one of the following:
    MESSAGE_BATCH("message_batch")
    class MessageBatchCanceledResult:
type: JsonValue; constant "canceled"
    Accepts one of the following:
    CANCELED("canceled")
class MessageBatchErroredResult:
error: ErrorResponse
error: ErrorObject
Accepts one of the following:
class InvalidRequestError:
message: String
type: JsonValue; constant "invalid_request_error"
Accepts one of the following:
INVALID_REQUEST_ERROR("invalid_request_error")
class AuthenticationError:
message: String
type: JsonValue; constant "authentication_error"
Accepts one of the following:
AUTHENTICATION_ERROR("authentication_error")
class BillingError:
message: String
type: JsonValue; constant "billing_error"
Accepts one of the following:
BILLING_ERROR("billing_error")
class PermissionError:
message: String
type: JsonValue; constant "permission_error"
Accepts one of the following:
PERMISSION_ERROR("permission_error")
class NotFoundError:
message: String
type: JsonValue; constant "not_found_error"
Accepts one of the following:
NOT_FOUND_ERROR("not_found_error")
class RateLimitError:
message: String
type: JsonValue; constant "rate_limit_error"
Accepts one of the following:
RATE_LIMIT_ERROR("rate_limit_error")
class GatewayTimeoutError:
message: String
type: JsonValue; constant "timeout_error"
Accepts one of the following:
TIMEOUT_ERROR("timeout_error")
class ApiErrorObject:
message: String
type: JsonValue; constant "api_error"
Accepts one of the following:
API_ERROR("api_error")
class OverloadedError:
message: String
type: JsonValue; constant "overloaded_error"
Accepts one of the following:
OVERLOADED_ERROR("overloaded_error")
requestId: Optional<String>
type: JsonValue; constant "error"
Accepts one of the following:
ERROR("error")
type: JsonValue; constant "errored"
Accepts one of the following:
ERRORED("errored")
    class MessageBatchExpiredResult:
type: JsonValue; constant "expired"
    Accepts one of the following:
    EXPIRED("expired")
    class MessageBatchIndividualResponse:

    This is a single line in the response .jsonl file and does not represent the response as a whole.

    customId: String

    Developer-provided ID created for each request in a Message Batch. Useful for matching results to requests, as results may be given out of request order.

    Must be unique for each request within the Message Batch.

    result: MessageBatchResult

    Processing result for this request.

    Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.

Accepts one of the following:
MessageBatchSucceededResult
MessageBatchErroredResult
MessageBatchCanceledResult
MessageBatchExpiredResult

See the corresponding class definitions elsewhere in this section.
    class MessageBatchRequestCounts:
    canceled: Long

    Number of requests in the Message Batch that have been canceled.

    This is zero until processing of the entire Message Batch has ended.

    errored: Long

    Number of requests in the Message Batch that encountered an error.

    This is zero until processing of the entire Message Batch has ended.

    expired: Long

    Number of requests in the Message Batch that have expired.

    This is zero until processing of the entire Message Batch has ended.

    processing: Long

    Number of requests in the Message Batch that are processing.

    succeeded: Long

    Number of requests in the Message Batch that have completed successfully.

    This is zero until processing of the entire Message Batch has ended.
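A small sketch of the invariant above: the per-status buckets always sum to the total number of requests in the batch. The getters mirror the MessageBatchRequestCounts fields; the import paths are assumptions.

    import com.anthropic.models.messages.batches.MessageBatch;
    import com.anthropic.models.messages.batches.MessageBatchRequestCounts;

    final class BatchCounts {
        // The per-status buckets always sum to the number of requests submitted.
        static long totalRequests(MessageBatch batch) {
            MessageBatchRequestCounts c = batch.requestCounts();
            return c.processing() + c.succeeded() + c.errored() + c.canceled() + c.expired();
        }
    }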

class MessageBatchResult: union (a class that can be one of several variants)

    Processing result for this request.

    Contains a Message output if processing was successful, an error response if processing failed, or the reason why processing was not attempted, such as cancellation or expiration.

Accepts one of the following:
MessageBatchSucceededResult
MessageBatchErroredResult
MessageBatchCanceledResult
MessageBatchExpiredResult

See the corresponding class definitions elsewhere in this section.
    class MessageBatchSucceededResult:
    message: Message
    id: String

    Unique object identifier.

    The format and length of IDs may change over time.

    content: List<ContentBlock>

    Content generated by the model.

    This is an array of content blocks, each of which has a type that determines its shape.

    Example:

    [{"type": "text", "text": "Hi, I'm Claude."}]
    

    If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

    For example, if the input messages were:

    [
      {"role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"},
      {"role": "assistant", "content": "The best answer is ("}
    ]
    

    Then the response content might be:

    [{"type": "text", "text": "B)"}]
    
    Accepts one of the following:
    class TextBlock:
    citations: Optional<List<TextCitation>>

    Citations supporting the text block.

    The type of citation returned will depend on the type of document being cited. Citing a PDF results in page_location, plain text results in char_location, and content document results in content_block_location.

    Accepts one of the following:
class CitationCharLocation:
citedText: String
documentIndex: Long
minimum: 0
documentTitle: Optional<String>
endCharIndex: Long
fileId: Optional<String>
startCharIndex: Long
minimum: 0
type: JsonValue; constant "char_location"
Accepts one of the following:
CHAR_LOCATION("char_location")
class CitationPageLocation:
citedText: String
documentIndex: Long
minimum: 0
documentTitle: Optional<String>
endPageNumber: Long
fileId: Optional<String>
startPageNumber: Long
minimum: 1
type: JsonValue; constant "page_location"
Accepts one of the following:
PAGE_LOCATION("page_location")
class CitationContentBlockLocation:
citedText: String
documentIndex: Long
minimum: 0
documentTitle: Optional<String>
endBlockIndex: Long
fileId: Optional<String>
startBlockIndex: Long
minimum: 0
type: JsonValue; constant "content_block_location"
Accepts one of the following:
CONTENT_BLOCK_LOCATION("content_block_location")
class CitationsWebSearchResultLocation:
citedText: String
encryptedIndex: String
title: Optional<String>
maxLength: 512
type: JsonValue; constant "web_search_result_location"
Accepts one of the following:
WEB_SEARCH_RESULT_LOCATION("web_search_result_location")
url: String
class CitationsSearchResultLocation:
citedText: String
endBlockIndex: Long
searchResultIndex: Long
minimum: 0
source: String
startBlockIndex: Long
minimum: 0
title: Optional<String>
type: JsonValue; constant "search_result_location"
Accepts one of the following:
SEARCH_RESULT_LOCATION("search_result_location")
text: String
maxLength: 5000000
minLength: 0
type: JsonValue; constant "text"
Accepts one of the following:
TEXT("text")
class ThinkingBlock:
signature: String
thinking: String
type: JsonValue; constant "thinking"
Accepts one of the following:
THINKING("thinking")
class RedactedThinkingBlock:
data: String
type: JsonValue; constant "redacted_thinking"
Accepts one of the following:
REDACTED_THINKING("redacted_thinking")
class ToolUseBlock:
id: String
input: Input
name: String
minLength: 1
type: JsonValue; constant "tool_use"
Accepts one of the following:
TOOL_USE("tool_use")
class ServerToolUseBlock:
id: String
input: Input
name: JsonValue; constant "web_search"
Accepts one of the following:
WEB_SEARCH("web_search")
type: JsonValue; constant "server_tool_use"
Accepts one of the following:
SERVER_TOOL_USE("server_tool_use")
class WebSearchToolResultBlock:
content: WebSearchToolResultBlockContent
Accepts one of the following:
class WebSearchToolResultError:
errorCode: ErrorCode
Accepts one of the following:
INVALID_TOOL_INPUT("invalid_tool_input")
UNAVAILABLE("unavailable")
MAX_USES_EXCEEDED("max_uses_exceeded")
TOO_MANY_REQUESTS("too_many_requests")
QUERY_TOO_LONG("query_too_long")
type: JsonValue; constant "web_search_tool_result_error"
Accepts one of the following:
WEB_SEARCH_TOOL_RESULT_ERROR("web_search_tool_result_error")
List<WebSearchResultBlock>
encryptedContent: String
pageAge: Optional<String>
title: String
type: JsonValue; constant "web_search_result"
Accepts one of the following:
WEB_SEARCH_RESULT("web_search_result")
url: String
toolUseId: String
type: JsonValue; constant "web_search_tool_result"
Accepts one of the following:
WEB_SEARCH_TOOL_RESULT("web_search_tool_result")
    model: Model

    The model that will complete your prompt.

    See models for additional details and options.

    Accepts one of the following:
    CLAUDE_3_7_SONNET_LATEST("claude-3-7-sonnet-latest")

    High-performance model with early extended thinking

    CLAUDE_3_7_SONNET_20250219("claude-3-7-sonnet-20250219")

    High-performance model with early extended thinking

    CLAUDE_3_5_HAIKU_LATEST("claude-3-5-haiku-latest")

    Fastest and most compact model for near-instant responsiveness

    CLAUDE_3_5_HAIKU_20241022("claude-3-5-haiku-20241022")

    Our fastest model

    CLAUDE_HAIKU_4_5("claude-haiku-4-5")

    Hybrid model, capable of near-instant responses and extended thinking

    CLAUDE_HAIKU_4_5_20251001("claude-haiku-4-5-20251001")

    Hybrid model, capable of near-instant responses and extended thinking

    CLAUDE_SONNET_4_20250514("claude-sonnet-4-20250514")

    High-performance model with extended thinking

    CLAUDE_SONNET_4_0("claude-sonnet-4-0")

    High-performance model with extended thinking

    CLAUDE_4_SONNET_20250514("claude-4-sonnet-20250514")

    High-performance model with extended thinking

    CLAUDE_SONNET_4_5("claude-sonnet-4-5")

    Our best model for real-world agents and coding

    CLAUDE_SONNET_4_5_20250929("claude-sonnet-4-5-20250929")

    Our best model for real-world agents and coding

    CLAUDE_OPUS_4_0("claude-opus-4-0")

    Our most capable model

    CLAUDE_OPUS_4_20250514("claude-opus-4-20250514")

    Our most capable model

    CLAUDE_4_OPUS_20250514("claude-4-opus-20250514")

    Our most capable model

    CLAUDE_OPUS_4_1_20250805("claude-opus-4-1-20250805")

    Our most capable model

    CLAUDE_3_OPUS_LATEST("claude-3-opus-latest")

    Excels at writing and complex tasks

    CLAUDE_3_OPUS_20240229("claude-3-opus-20240229")

    Excels at writing and complex tasks

    CLAUDE_3_HAIKU_20240307("claude-3-haiku-20240307")

Our previous fastest and most cost-effective model

role: JsonValue; constant "assistant"

    Conversational role of the generated message.

    This will always be "assistant".

    Accepts one of the following:
    ASSISTANT("assistant")
    stopReason: Optional<StopReason>

    The reason that we stopped.

This may be one of the following values:

    • "end_turn": the model reached a natural stopping point
    • "max_tokens": we exceeded the requested max_tokens or the model's maximum
    • "stop_sequence": one of your provided custom stop_sequences was generated
    • "tool_use": the model invoked one or more tools
    • "pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
    • "refusal": when streaming classifiers intervene to handle potential policy violations

    In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

    Accepts one of the following:
    END_TURN("end_turn")
    MAX_TOKENS("max_tokens")
    STOP_SEQUENCE("stop_sequence")
    TOOL_USE("tool_use")
    PAUSE_TURN("pause_turn")
    REFUSAL("refusal")
    stopSequence: Optional<String>

    Which custom stop sequence was generated, if any.

    This value will be a non-null string if one of your custom stop sequences was generated.

type: JsonValue; constant "message"

    Object type.

    For Messages, this is always "message".

    Accepts one of the following:
    MESSAGE("message")
    usage: Usage

    Billing and rate-limit usage.

    Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

    Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

    For example, output_tokens will be non-zero, even for an empty string response from Claude.

    Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.

    cacheCreation: Optional<CacheCreation>

    Breakdown of cached tokens by TTL

    ephemeral1hInputTokens: Long

    The number of input tokens used to create the 1 hour cache entry.

minimum: 0
    ephemeral5mInputTokens: Long

    The number of input tokens used to create the 5 minute cache entry.

minimum: 0
    cacheCreationInputTokens: Optional<Long>

    The number of input tokens used to create the cache entry.

minimum: 0
    cacheReadInputTokens: Optional<Long>

    The number of input tokens read from the cache.

minimum: 0
    inputTokens: Long

    The number of input tokens which were used.

minimum: 0
    outputTokens: Long

    The number of output tokens which were used.

minimum: 0
    serverToolUse: Optional<ServerToolUsage>

    The number of server tool requests.

    webSearchRequests: Long

    The number of web search tool requests.

minimum: 0
    serviceTier: Optional<ServiceTier>

    If the request used the priority, standard, or batch tier.

    Accepts one of the following:
    STANDARD("standard")
    PRIORITY("priority")
    BATCH("batch")
type: JsonValue; constant "succeeded"
    Accepts one of the following:
    SUCCEEDED("succeeded")
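As the Usage description above notes, total input tokens are the sum of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens. A small sketch of that arithmetic for a succeeded result; the import paths are assumptions, while the accessors mirror the documented fields.

    import com.anthropic.models.messages.Usage;
    import com.anthropic.models.messages.batches.MessageBatchSucceededResult;

    final class UsageTotals {
        // Total input tokens billed for one succeeded batch request:
        // input_tokens + cache_creation_input_tokens + cache_read_input_tokens.
        static long totalInputTokens(MessageBatchSucceededResult result) {
            Usage usage = result.message().usage();
            return usage.inputTokens()
                    + usage.cacheCreationInputTokens().orElse(0L)
                    + usage.cacheReadInputTokens().orElse(0L);
        }
    }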