    Create a Text Completion
    POST /v1/complete

    [Legacy] Create a Text Completion.

    The Text Completions API is a legacy API. We recommend using the Messages API going forward.

    Future models and features will not be compatible with Text Completions. See our migration guide for guidance in migrating from Text Completions to Messages.
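    The sketch below (an illustration only; the prompt and model are placeholders mirroring the example at the bottom of this page, and the migration guide remains the authoritative reference) contrasts a minimal Text Completions request with its Messages equivalent:

    cURL
    # Legacy Text Completions request.
    curl https://api.anthropic.com/v1/complete \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d '{
              "model": "claude-3-7-sonnet-latest",
              "max_tokens_to_sample": 256,
              "prompt": "\n\nHuman: Hello, world!\n\nAssistant:"
            }'

    # Roughly equivalent Messages API request: structured turns instead of a raw
    # prompt string, and max_tokens instead of max_tokens_to_sample.
    curl https://api.anthropic.com/v1/messages \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d '{
              "model": "claude-3-7-sonnet-latest",
              "max_tokens": 256,
              "messages": [{"role": "user", "content": "Hello, world!"}]
            }'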

    Header Parameters
    "anthropic-beta": optional array of AnthropicBeta

    Optional header to specify the beta version(s) you want to use.

    Each element accepts either an arbitrary string or one of the following beta identifiers:
    "message-batches-2024-09-24"
    "prompt-caching-2024-07-31"
    "computer-use-2024-10-22"
    "computer-use-2025-01-24"
    "pdfs-2024-09-25"
    "token-counting-2024-11-01"
    "token-efficient-tools-2025-02-19"
    "output-128k-2025-02-19"
    "files-api-2025-04-14"
    "mcp-client-2025-04-04"
    "dev-full-thinking-2025-05-14"
    "interleaved-thinking-2025-05-14"
    "code-execution-2025-05-22"
    "extended-cache-ttl-2025-04-11"
    "context-1m-2025-08-07"
    "context-management-2025-06-27"
    "model-context-window-exceeded-2025-08-26"
    "skills-2025-10-02"
    Body Parameters
    max_tokens_to_sample: number

    The maximum number of tokens to generate before stopping.

    Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

    minimum: 1
    model: Model

    The model that will complete your prompt.

    See models for additional details and options.

    Accepts either an arbitrary model identifier string, or one of the following known values:

    "claude-3-7-sonnet-latest": High-performance model with early extended thinking
    "claude-3-7-sonnet-20250219": High-performance model with early extended thinking
    "claude-3-5-haiku-latest": Fastest and most compact model for near-instant responsiveness
    "claude-3-5-haiku-20241022": Our fastest model
    "claude-haiku-4-5": Hybrid model, capable of near-instant responses and extended thinking
    "claude-haiku-4-5-20251001": Hybrid model, capable of near-instant responses and extended thinking
    "claude-sonnet-4-20250514": High-performance model with extended thinking
    "claude-sonnet-4-0": High-performance model with extended thinking
    "claude-4-sonnet-20250514": High-performance model with extended thinking
    "claude-sonnet-4-5": Our best model for real-world agents and coding
    "claude-sonnet-4-5-20250929": Our best model for real-world agents and coding
    "claude-opus-4-0": Our most capable model
    "claude-opus-4-20250514": Our most capable model
    "claude-4-opus-20250514": Our most capable model
    "claude-opus-4-1-20250805": Our most capable model
    "claude-3-opus-latest": Excels at writing and complex tasks
    "claude-3-opus-20240229": Excels at writing and complex tasks
    "claude-3-haiku-20240307": Our previous fastest and most cost-effective model
    prompt: string

    The prompt that you want Claude to complete.

    For proper response generation you will need to format your prompt using alternating `\n\nHuman:` and `\n\nAssistant:` conversational turns. For example:

    "\n\nHuman: {userQuestion}\n\nAssistant:"

    See prompt validation and our guide to prompt design for more details.

    minLength: 1
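    As an illustrative sketch (the turn contents are placeholders, and the model mirrors the example at the bottom of this page), a multi-turn prompt simply concatenates the alternating markers, ending with an open `\n\nAssistant:` turn for Claude to complete:

    cURL
    # Two-turn prompt: the first Assistant turn is pre-filled, and Claude
    # completes the final open Assistant turn.
    curl https://api.anthropic.com/v1/complete \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d '{
              "model": "claude-3-7-sonnet-latest",
              "max_tokens_to_sample": 256,
              "prompt": "\n\nHuman: Name a color.\n\nAssistant: Blue.\n\nHuman: Now name a fruit of that color.\n\nAssistant:"
            }'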
    metadata: optional Metadata { user_id }

    An object describing metadata about the request.

    user_id: optional string

    An external identifier for the user who is associated with the request.

    This should be a uuid, hash value, or other opaque identifier. Anthropic may use this id to help detect abuse. Do not include any identifying information such as name, email address, or phone number.

    maxLength: 256
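    For instance (a sketch, not part of this reference; the account identifier is a placeholder and sha256sum is assumed to be available), an opaque user_id can be derived by hashing an internal account identifier before sending it:

    cURL
    # Hash an internal account id so no personal data is sent.
    # sha256sum ships with GNU coreutils; on macOS use `shasum -a 256` instead.
    USER_ID=$(printf '%s' "internal-account-42" | sha256sum | cut -d' ' -f1)

    curl https://api.anthropic.com/v1/complete \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d "{
              \"model\": \"claude-3-7-sonnet-latest\",
              \"max_tokens_to_sample\": 256,
              \"prompt\": \"\\n\\nHuman: Hello, world!\\n\\nAssistant:\",
              \"metadata\": {\"user_id\": \"$USER_ID\"}
            }"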
    stop_sequences: optional array of string

    Sequences that will cause the model to stop generating.

    Our models stop on `"\n\nHuman:"`, and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.

    stream: optional boolean

    Whether to incrementally stream the response using server-sent events.

    See streaming for details.
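    For example (a sketch assuming $ANTHROPIC_API_KEY is set; curl's -N flag simply disables output buffering so server-sent events are printed as they arrive):

    cURL
    # Request a streamed response; the API replies with server-sent events.
    curl -N https://api.anthropic.com/v1/complete \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d '{
              "model": "claude-3-7-sonnet-latest",
              "max_tokens_to_sample": 256,
              "stream": true,
              "prompt": "\n\nHuman: Hello, world!\n\nAssistant:"
            }'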

    temperature: optional number

    Amount of randomness injected into the response.

    Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.

    Note that even with temperature of 0.0, the results will not be fully deterministic.

    maximum: 1
    minimum: 0
    top_k: optional number

    Only sample from the top K options for each subsequent token.

    Used to remove "long tail" low probability responses. Learn more technical details here.

    Recommended for advanced use cases only. You usually only need to use temperature.

    minimum: 0
    top_p: optional number

    Use nucleus sampling.

    In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.

    Recommended for advanced use cases only. You usually only need to use temperature.

    maximum: 1
    minimum: 0
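    To illustrate the sampling parameters (a sketch only; the values and prompt are arbitrary), a near-deterministic analytical request might lower temperature and leave top_p and top_k unset:

    cURL
    # Low temperature for an analytical, short-answer style prompt.
    # top_p and top_k are intentionally omitted; adjust temperature or top_p, not both.
    curl https://api.anthropic.com/v1/complete \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d '{
              "model": "claude-3-7-sonnet-latest",
              "max_tokens_to_sample": 16,
              "temperature": 0.0,
              "prompt": "\n\nHuman: Which is larger, 7 or 12? Answer with just the number.\n\nAssistant:"
            }'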
    Returns
    Completion = object { id, completion, model, stop_reason, type }
    id: string

    Unique object identifier.

    The format and length of IDs may change over time.

    completion: string

    The resulting completion up to and excluding the stop sequences.

    model: Model

    The model that will complete your prompt.

    See models for additional details and options.

    Accepts the same values as the model request parameter above: any of the model identifiers listed there, or an arbitrary string.
    stop_reason: string

    The reason that we stopped.

    This may be one of the following values (a brief handling sketch follows the list):

    • "stop_sequence": we reached a stop sequence — either provided by you via the stop_sequences parameter, or a stop sequence built into the model
    • "max_tokens": we exceeded max_tokens_to_sample or the model's maximum
    type: "completion"

    Object type.

    For Text Completions, this is always "completion".

    Accepts one of the following:
    "completion"
    Create a Text Completion
    cURL
    curl https://api.anthropic.com/v1/complete \
        -H 'Content-Type: application/json' \
        -H "X-Api-Key: $ANTHROPIC_API_KEY" \
        -H 'anthropic-version: 2023-06-01' \
        -d '{
              "max_tokens_to_sample": 256,
              "model": "claude-3-7-sonnet-latest",
              "prompt": "\n\nHuman: Hello, world!\n\nAssistant:"
            }'
    Returns Examples
    {
      "id": "compl_018CKm6gsux7P8yMcwZbeCPw",
      "completion": " Hello! My name is Claude.",
      "model": "claude-2.1",
      "stop_reason": "stop_sequence",
      "type": "completion"
    }