
    TypeScript SDK

    Install and configure the Anthropic TypeScript SDK for Node.js, Deno, Bun, and browser environments

    This library provides convenient access to the Anthropic REST API from server-side TypeScript or JavaScript.

    For API feature documentation with code examples, see the API reference. This page covers TypeScript-specific SDK features and configuration.

    Installation

    npm install @anthropic-ai/sdk

    Requirements

    TypeScript >= 4.9 is supported.

    The following runtimes are supported:

    • Node.js 20 LTS or later (non-EOL) versions.
    • Deno v1.28.0 or higher.
    • Bun 1.0 or later.
    • Cloudflare Workers.
    • Vercel Edge Runtime.
    • Jest 28 or greater with the "node" environment ("jsdom" is not supported at this time).
    • Nitro v2.6 or greater.
    • Web browsers: disabled by default to avoid exposing your secret API credentials (see API key best practices). Enable browser support by explicitly setting dangerouslyAllowBrowser to true; a minimal sketch follows below.

    Note that React Native is not supported at this time.

    If you are interested in other runtime environments, please open or upvote an issue on GitHub.
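
    For example, a minimal sketch of the browser opt-in (the API key value is a placeholder; in practice, mint short-lived credentials on your own backend rather than shipping a secret key to the browser):

    import Anthropic from '@anthropic-ai/sdk';
    
    // Hypothetical browser-side setup. Exposing a long-lived secret key in the
    // browser lets anyone who can inspect the page use your account.
    const client = new Anthropic({
      apiKey: 'short-lived-token-from-your-backend', // placeholder credential
      dangerouslyAllowBrowser: true, // explicit opt-in to the exposure risk
    });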

    Usage

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic({
      apiKey: process.env['ANTHROPIC_API_KEY'], // This is the default and can be omitted
    });
    
    const message = await client.messages.create({
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
      model: 'claude-opus-4-6',
    });
    
    console.log(message.content);

    Request & Response types

    This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic({
      apiKey: process.env['ANTHROPIC_API_KEY'], // This is the default and can be omitted
    });
    
    const params: Anthropic.MessageCreateParams = {
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
      model: 'claude-opus-4-6',
    };
    const message: Anthropic.Message = await client.messages.create(params);

    Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.

    Counting Tokens

    You can see the exact usage for a given request through the usage response property, e.g.

    const message = await client.messages.create(...)
    console.log(message.usage)
    // { input_tokens: 25, output_tokens: 13 }
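
    You can also count tokens before sending a request. A minimal sketch using the token-counting endpoint:

    // Estimate input tokens without creating a message.
    const count = await client.messages.countTokens({
      model: 'claude-opus-4-6',
      messages: [{ role: 'user', content: 'Hello, Claude' }],
    });
    console.log(count.input_tokens);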

    Streaming responses

    We provide support for streaming responses using Server-Sent Events (SSE).

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic();
    
    const stream = await client.messages.create({
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
      model: 'claude-opus-4-6',
      stream: true,
    });
    for await (const messageStreamEvent of stream) {
      console.log(messageStreamEvent.type);
    }

    Alternatively, break out of the iteration loop to cancel the stream mid-response.
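
    For example, a minimal sketch that stops reading once the first content block finishes (breaking out of the loop aborts the request):

    const stream = await client.messages.create({
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
      model: 'claude-opus-4-6',
      stream: true,
    });
    for await (const event of stream) {
      console.log(event.type);
      if (event.type === 'content_block_stop') break; // cancels the request
    }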

    Streaming Helpers

    This library provides several conveniences for streaming messages, for example:

    import Anthropic from '@anthropic-ai/sdk';
    
    const anthropic = new Anthropic();
    
    async function main() {
      const stream = anthropic.messages
        .stream({
          model: 'claude-opus-4-6',
          max_tokens: 1024,
          messages: [
            {
              role: 'user',
              content: 'Say hello there!',
            },
          ],
        })
        .on('text', (text) => {
          console.log(text);
        });
    
      const message = await stream.finalMessage();
      console.log(message);
    }
    
    main();

    Streaming with client.messages.stream(...) exposes various helpers for your convenience including event handlers and accumulation.

    Alternatively, you can use client.messages.create({ ..., stream: true }) which only returns an async iterable of the events in the stream and thus uses less memory (it does not build up a final message object for you).

    Tool Helpers

    This SDK provides helpers that make it easy to create and run tools with the Messages API. You can use Zod schemas or JSON Schemas to describe the input to a tool. You can then run those tools using the toolRunner() method, which handles passing the inputs generated by the chosen model into the right tool and passing the result back to the model.

    For more details on tool use, see the tool use overview.

    import Anthropic from '@anthropic-ai/sdk';
    
    import { betaZodTool } from '@anthropic-ai/sdk/helpers/beta/zod';
    import { z } from 'zod';
    
    const anthropic = new Anthropic();
    
    const weatherTool = betaZodTool({
      name: 'get_weather',
      inputSchema: z.object({
        location: z.string(),
      }),
      description: 'Get the current weather in a given location',
      run: (input) => {
        return `The weather in ${input.location} is foggy and 60°F`;
      },
    });
    
    const finalMessage = await anthropic.beta.messages.toolRunner({
      model: 'claude-opus-4-6',
      max_tokens: 1000,
      messages: [{ role: 'user', content: 'What is the weather in San Francisco?' }],
      tools: [weatherTool],
    });

    Tool use

    This SDK provides support for tool use, also known as function calling. More details can be found in the tool use overview.
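
    If you prefer not to use the helpers above, a minimal sketch of passing a raw tool definition to the Messages API (the tool name and schema are illustrative):

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic();
    
    const message = await client.messages.create({
      model: 'claude-opus-4-6',
      max_tokens: 1024,
      tools: [
        {
          name: 'get_weather', // illustrative tool name
          description: 'Get the current weather in a given location',
          input_schema: {
            type: 'object',
            properties: { location: { type: 'string' } },
            required: ['location'],
          },
        },
      ],
      messages: [{ role: 'user', content: 'What is the weather in San Francisco?' }],
    });
    
    // When the model chooses to call the tool, the response contains a
    // `tool_use` content block with the generated input.
    console.log(message.content.find((block) => block.type === 'tool_use'));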

    Message Batches

    This SDK provides support for the Message Batches API under the client.messages.batches namespace.

    Creating a batch

    The Message Batches API takes an array of requests, where each entry has a custom_id identifier and the exact same request params as the standard Messages API:

    await anthropic.messages.batches.create({
      requests: [
        {
          custom_id: 'my-first-request',
          params: {
            model: 'claude-opus-4-6',
            max_tokens: 1024,
            messages: [{ role: 'user', content: 'Hello, world' }],
          },
        },
        {
          custom_id: 'my-second-request',
          params: {
            model: 'claude-opus-4-6',
            max_tokens: 1024,
            messages: [{ role: 'user', content: 'Hi again, friend' }],
          },
        },
      ],
    });

    Getting results from a batch

    Once a Message Batch has been processed, indicated by .processing_status === 'ended', you can access the results with .batches.results():

    const results = await anthropic.messages.batches.results(batch_id);
    for await (const entry of results) {
      if (entry.result.type === 'succeeded') {
        console.log(entry.result.message.content);
      }
    }
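
    You can poll until processing ends before fetching results; a minimal sketch (the 60-second interval is arbitrary):

    let batch = await anthropic.messages.batches.retrieve(batch_id);
    while (batch.processing_status !== 'ended') {
      // Wait before polling again to avoid hammering the API.
      await new Promise((resolve) => setTimeout(resolve, 60_000));
      batch = await anthropic.messages.batches.retrieve(batch_id);
    }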

    File uploads

    Request parameters that correspond to file uploads can be passed in many different forms:

    • File (or an object with the same structure)
    • a fetch Response (or an object with the same structure)
    • an fs.ReadStream
    • the return value of our toFile helper

    Note that we recommend you set the content-type explicitly, as the Files API will not infer it for you:

    import fs from 'fs';
    import Anthropic, { toFile } from '@anthropic-ai/sdk';
    
    const client = new Anthropic();
    
    // If you have access to Node `fs` we recommend using `fs.createReadStream()`:
    await client.beta.files.upload({
      file: await toFile(fs.createReadStream('/path/to/file'), undefined, { type: 'application/json' }),
      betas: ['files-api-2025-04-14'],
    });
    
    // Or if you have the web `File` API you can pass a `File` instance:
    await client.beta.files.upload({
      file: new File(['my bytes'], 'file.txt', { type: 'text/plain' }),
      betas: ['files-api-2025-04-14'],
    });
    // You can also pass a `fetch` `Response`:
    await client.beta.files.upload({
      file: await fetch('https://somesite/file'),
      betas: ['files-api-2025-04-14'],
    });
    
    // Or a `Buffer` / `Uint8Array`
    await client.beta.files.upload({
      file: await toFile(Buffer.from('my bytes'), 'file', { type: 'text/plain' }),
      betas: ['files-api-2025-04-14'],
    });
    await client.beta.files.upload({
      file: await toFile(new Uint8Array([0, 1, 2]), 'file', { type: 'text/plain' }),
      betas: ['files-api-2025-04-14'],
    });

    Handling errors

    When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of APIError will be thrown:

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic();
    
    const message = await client.messages
      .create({
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Hello, Claude' }],
        model: 'claude-opus-4-6',
      })
      .catch(async (err) => {
        if (err instanceof Anthropic.APIError) {
          console.log(err.status); // 400
          console.log(err.name); // BadRequestError
          console.log(err.headers); // {server: 'nginx', ...}
        } else {
          throw err;
        }
      });

    Error codes are as follows:

    Status Code   Error Type
    400           BadRequestError
    401           AuthenticationError
    403           PermissionDeniedError
    404           NotFoundError
    422           UnprocessableEntityError
    429           RateLimitError
    >=500         InternalServerError
    N/A           APIConnectionError

    Request IDs

    For more information on debugging requests, see these docs.

    All object responses in the SDK provide a _request_id property which is added from the request-id response header so that you can quickly log failing requests and report them back to Anthropic.

    const message = await client.messages.create({
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
      model: 'claude-opus-4-6',
    });
    console.log(message._request_id); // req_018EeWyXxfu5pfWkrYcMdjWG

    Retries

    Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

    You can use the maxRetries option to configure or disable this:

    // Configure the default for all requests:
    const client = new Anthropic({
      maxRetries: 0, // default is 2
    });
    
    // Or, configure per-request:
    await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-opus-4-6' }, {
      maxRetries: 5,
    });

    Timeouts

    By default, requests time out after 10 minutes. However, if you have specified a large max_tokens value and are not streaming, the default timeout is calculated dynamically using the formula:

    const minimum = 10 * 60;
    const calculated = (60 * 60 * maxTokens) / 128_000;
    return calculated < minimum ? minimum * 1000 : calculated * 1000;

    which will result in a timeout up to 60 minutes, scaled by the max_tokens parameter, unless overridden at the request or client level.

    You can configure this with a timeout option:

    // Configure the default for all requests:
    const client = new Anthropic({
      timeout: 20 * 1000, // 20 seconds (default is 10 minutes)
    });
    
    // Override per-request:
    await client.messages.create({ max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-opus-4-6' }, {
      timeout: 5 * 1000,
    });

    On timeout, an APIConnectionTimeoutError is thrown.

    Note that requests which time out will be retried twice by default.
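
    A minimal sketch of handling the timeout error specifically, assuming the error class is exposed on the Anthropic namespace as with APIError above:

    try {
      await client.messages.create(
        { max_tokens: 1024, messages: [{ role: 'user', content: 'Hello, Claude' }], model: 'claude-opus-4-6' },
        { timeout: 1000 }, // deliberately short, to illustrate
      );
    } catch (err) {
      if (err instanceof Anthropic.APIConnectionTimeoutError) {
        console.log('Request timed out (after automatic retries)');
      } else {
        throw err;
      }
    }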

    Long Requests

    We highly encourage you to use the streaming Messages API for longer-running requests.

    We do not recommend setting a large max_tokens value without using streaming. Some networks may drop idle connections after a certain period of time, which can cause the request to fail or time out without receiving a response from Anthropic.

    This SDK will also throw an error if a non-streaming request is expected to take longer than roughly 10 minutes. Passing stream: true or overriding the timeout option at the client or request level disables this error.

    If the expected latency of a non-streaming request exceeds the configured timeout, the client will terminate the connection and retry without ever receiving a response.

    When supported by the fetch implementation, we set a TCP socket keep-alive option in order to reduce the impact of idle connection timeouts on some networks. This can be overridden by configuring a custom proxy.

    Auto-pagination

    List methods in the Claude API are paginated. You can use the for await ... of syntax to iterate through items across all pages:

    const client = new Anthropic();
    
    async function fetchAllMessageBatches() {
      const allMessageBatches = [];
      // Automatically fetches more pages as needed.
      for await (const messageBatch of client.messages.batches.list({ limit: 20 })) {
        allMessageBatches.push(messageBatch);
      }
      return allMessageBatches;
    }

    Alternatively, you can request a single page at a time:

    let page = await client.messages.batches.list({ limit: 20 });
    for (const messageBatch of page.data) {
      console.log(messageBatch);
    }
    
    // Convenience methods are provided for manually paginating:
    while (page.hasNextPage()) {
      page = await page.getNextPage();
      // ...
    }

    Default Headers

    We automatically send the anthropic-version header set to 2023-06-01.

    If you need to, you can override it by setting the header on a per-request basis.

    Be aware that doing so may result in incorrect types and other unexpected or undefined behavior in the SDK.

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic();
    
    const message = await client.messages.create(
      {
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Hello, Claude' }],
        model: 'claude-opus-4-6',
      },
      { headers: { 'anthropic-version': 'My-Custom-Value' } },
    );

    Advanced Usage

    Accessing raw Response data (e.g., headers)

    The "raw" Response returned by fetch() can be accessed through the .asResponse() method on the APIPromise type that all methods return. This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.

    You can also use the .withResponse() method to get the raw Response along with the parsed data. Unlike .asResponse() this method consumes the body, returning once it is parsed.

    const client = new Anthropic();
    
    const response = await client.messages
      .create({
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Hello, Claude' }],
        model: 'claude-opus-4-6',
      })
      .asResponse();
    console.log(response.headers.get('X-My-Header'));
    console.log(response.statusText); // access the underlying Response object
    
    const { data: message, response: raw } = await client.messages
      .create({
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Hello, Claude' }],
        model: 'claude-opus-4-6',
      })
      .withResponse();
    console.log(raw.headers.get('X-My-Header'));
    console.log(message.content);

    Logging

    All log messages are intended for debugging only. The format and content of log messages may change between releases.

    Log levels

    The log level can be configured in two ways:

    1. Via the ANTHROPIC_LOG environment variable
    2. Using the logLevel client option (overrides the environment variable if set)

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic({
      logLevel: 'debug', // Show all log messages
    });

    Available log levels, from most to least verbose:

    • 'debug' - Show debug messages, info, warnings, and errors
    • 'info' - Show info messages, warnings, and errors
    • 'warn' - Show warnings and errors (default)
    • 'error' - Show only errors
    • 'off' - Disable all logging

    At the 'debug' level, all HTTP requests and responses are logged, including headers and bodies. Some authentication-related headers are redacted, but sensitive data in request and response bodies may still be visible.

    Custom logger

    By default, this library logs to globalThis.console. You can also provide a custom logger. Most logging libraries are supported, including pino, winston, bunyan, consola, signale, and @std/log. If your logger doesn't work, please open an issue.

    When providing a custom logger, the logLevel option still controls which messages are emitted; messages below the configured level will not be sent to your logger.

    import Anthropic from '@anthropic-ai/sdk';
    import pino from 'pino';
    
    const logger = pino();
    
    const client = new Anthropic({
      logger: logger.child({ name: 'Anthropic' }),
      logLevel: 'debug', // Send all messages to pino, allowing it to filter
    });

    Making custom/undocumented requests

    This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.

    Undocumented endpoints

    To make requests to undocumented endpoints, you can use client.get, client.post, and other HTTP verbs. Options on the client, such as retries, will be respected when making these requests.

    await client.post('/some/path', {
      body: { some_prop: 'foo' },
      query: { some_query_arg: 'bar' },
    });

    Undocumented request params

    To make requests using undocumented parameters, you may use // @ts-expect-error on the undocumented parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you send will be sent as-is.

    client.messages.create({
      // ...
      // @ts-expect-error baz is not yet public
      baz: 'undocumented option',
    });

    For requests with the GET verb, any extra params will be sent in the query string; all other requests will send the extra params in the body.

    If you want to explicitly send an extra argument, you can do so with the query, body, and headers request options.
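
    For example, a minimal sketch passing an extra query parameter and header through request options (the names are illustrative):

    await client.messages.create(
      {
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Hello, Claude' }],
        model: 'claude-opus-4-6',
      },
      {
        query: { some_query_arg: 'bar' }, // appended to the URL query string
        headers: { 'X-My-Header': 'value' }, // merged into the request headers
      },
    );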

    Undocumented response properties

    To access undocumented response properties, you may use // @ts-expect-error on the property access, or cast the response object to the requisite type. As with request params, we do not validate or strip extra properties from the API response.
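
    A minimal sketch (the property name is hypothetical):

    const message = await client.messages.create({
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
      model: 'claude-opus-4-6',
    });
    
    // @ts-expect-error `secret_prop` is a hypothetical undocumented property
    console.log(message.secret_prop);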

    Customizing the fetch client

    By default, this library expects that a global fetch function is defined.

    If you want to use a different fetch function, you can either polyfill the global:

    import fetch from 'my-fetch';
    
    globalThis.fetch = fetch;

    Or pass it to the client:

    import Anthropic from '@anthropic-ai/sdk';
    import fetch from 'my-fetch';
    
    const client = new Anthropic({ fetch });

    Fetch options

    If you want to set custom fetch options without overriding the fetch function, you can provide a fetchOptions object when instantiating the client or making a request. (Request-specific options override client options.)

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic({
      fetchOptions: {
        // `RequestInit` options
      },
    });

    Configuring proxies

    To modify proxy behavior, you can provide custom fetchOptions that add runtime-specific proxy options to requests.
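
    For example, in Node.js (whose built-in fetch is powered by undici), a minimal sketch using undici's ProxyAgent (the proxy URL is a placeholder):

    import Anthropic from '@anthropic-ai/sdk';
    import { ProxyAgent } from 'undici';
    
    const client = new Anthropic({
      fetchOptions: {
        // undici's fetch accepts a `dispatcher` option for proxying.
        dispatcher: new ProxyAgent('http://localhost:8888'), // placeholder proxy URL
      },
    });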

    Beta Features

    We introduce beta features before they are generally available to get early feedback and test new functionality. You can check the availability of all of Claude's capabilities and tools in the build with Claude overview.

    You can access most beta API features through the beta property of the client. To enable a particular beta feature, you need to add the appropriate beta header to the betas field when creating a message.

    For example, to use code execution:

    import Anthropic from '@anthropic-ai/sdk';
    
    const client = new Anthropic();
    const response = await client.beta.messages.create({
      max_tokens: 1024,
      model: 'claude-opus-4-6',
      messages: [
        {
          role: 'user',
          content: [
            {
              type: 'text',
              text: "What's 4242424242 * 4242424242?.",
            },
          ],
        },
      ],
      tools: [
        {
          name: 'code_execution',
          type: 'code_execution_20250522',
        },
      ],
      betas: ['code-execution-2025-05-22'],
    });

    Runtime support

    Platform integrations

    For detailed platform setup guides, see:

    • Amazon Bedrock
    • Google Vertex AI
    • Microsoft Azure / Foundry

    Amazon Bedrock

    We provide support for the Anthropic Bedrock API through a separate package.

    npm install @anthropic-ai/bedrock-sdk

    import { AnthropicBedrock } from '@anthropic-ai/bedrock-sdk';
    
    const client = new AnthropicBedrock();
    
    const message = await client.messages.create({
      model: 'anthropic.claude-opus-4-6-v1',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
    });

    Google Vertex AI

    We provide support for the Anthropic Vertex AI API through a separate package.

    npm install @anthropic-ai/vertex-sdk

    import { AnthropicVertex } from '@anthropic-ai/vertex-sdk';
    
    const client = new AnthropicVertex();
    
    const message = await client.messages.create({
      model: 'claude-opus-4-6',
      max_tokens: 1024,
      messages: [{ role: 'user', content: 'Hello, Claude' }],
    });

    Microsoft Azure / Foundry

    For information on using Claude through Microsoft Azure and Azure AI Foundry, see Claude in Microsoft Foundry.

    Frequently Asked Questions

    See the GitHub repository for FAQs, issues, and community support.

    Additional resources

    • GitHub repository
    • API reference
    • Streaming guide
    • Tool use guide
