This library provides convenient access to the Anthropic REST API from server-side TypeScript or JavaScript.
For API feature documentation with code examples, see the API reference. This page covers TypeScript-specific SDK features and configuration.
npm install @anthropic-ai/sdk

TypeScript >= 4.9 is supported.
The following runtimes are supported:
"node" environment ("jsdom" is not supported at this time).dangerouslyAllowBrowser to true.Note that React Native is not supported at this time.
If you are interested in other runtime environments, please open or upvote an issue on GitHub.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
apiKey: process.env["ANTHROPIC_API_KEY"] // This is the default and can be omitted
});
const message = await client.messages.create({
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
});
console.log(message.content);

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
const client = new Anthropic({
apiKey: process.env["ANTHROPIC_API_KEY"] // This is the default and can be omitted
});
const params: Anthropic.MessageCreateParams = {
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
};
const message: Anthropic.Message = await client.messages.create(params);

Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
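As an illustration of how these types help in practice, response content arrives as an array of discriminated content blocks, and narrowing on the type field gives you access to the block-specific fields. A minimal sketch with a hand-written stand-in type (not the SDK's own definitions):

```typescript
// Hand-written stand-in for the SDK's content block union.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };

const content: ContentBlock[] = [{ type: "text", text: "Hello from Claude" }];

for (const block of content) {
  if (block.type === "text") {
    // Narrowed: `block.text` is only available on text blocks.
    console.log(block.text);
  }
}
```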
You can see the exact usage for a given request through the usage response property, e.g.
const message = await client.messages.create(/* ... */);
console.log(message.usage);
// { input_tokens: 25, output_tokens: 13 }

The SDK provides support for streaming responses using Server-Sent Events (SSE).
const client = new Anthropic();
const stream = await client.messages.create({
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6",
stream: true
});
for await (const messageStreamEvent of stream) {
console.log(messageStreamEvent.type);
}

If you need to cancel a stream, you can break from the loop or call stream.controller.abort().
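The break-from-the-loop option can be sketched with a stand-in async iterable, so it runs without an API key; with a real stream from client.messages.create({ ..., stream: true }) you would break out of the for await loop the same way, or call stream.controller.abort():

```typescript
// Stand-in for a message stream; each yield represents one SSE event.
async function* fakeStream(): AsyncGenerator<{ type: string }> {
  for (let i = 0; i < 100; i++) yield { type: "content_block_delta" };
}

async function consumeAtMost(limit: number): Promise<number> {
  let seen = 0;
  for await (const event of fakeStream()) {
    seen++;
    if (seen >= limit) break; // breaking the loop stops consuming the stream
  }
  return seen;
}

consumeAtMost(3).then((seen) => console.log(seen)); // stops after 3 events
```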
This library provides several conveniences for streaming messages, for example:
const stream = anthropic.messages
.stream({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: "Say hello there!"
}
]
})
.on("text", (text) => {
console.log(text);
});
const message = await stream.finalMessage();
console.log(message);

Streaming with client.messages.stream(...) exposes various helpers for your convenience, including event handlers and accumulation.
Alternatively, you can use client.messages.create({ ..., stream: true }) which only returns an async iterable of the events in the stream and thus uses less memory (it does not build up a final message object for you).
This SDK provides helpers that make it easy to create and run tools in the Messages API. You can use Zod schemas or JSON Schemas to describe the input to a tool. You can then run those tools using the client.beta.messages.toolRunner() method, which handles passing the model-generated inputs to the right tool and passing the result back to the model.
For more details on tool use, see the tool use overview.
import { betaZodTool } from "@anthropic-ai/sdk/helpers/beta/zod";
import { z } from "zod";
const anthropic = new Anthropic();
const weatherTool = betaZodTool({
name: "get_weather",
inputSchema: z.object({
location: z.string()
}),
description: "Get the current weather in a given location",
run: (input) => {
return `The weather in ${input.location} is foggy and 60°F`;
}
});
const finalMessage = await anthropic.beta.messages.toolRunner({
model: "claude-opus-4-6",
max_tokens: 1000,
messages: [{ role: "user", content: "What is the weather in San Francisco?" }],
tools: [weatherTool]
});

To report an error from a tool back to the model, throw a ToolError from the run function. Unlike a plain Error, ToolError accepts content blocks, allowing you to include images or other structured content in the error response:
import { ToolError } from "@anthropic-ai/sdk/lib/tools/BetaRunnableTool";
const screenshotTool = betaZodTool({
name: "take_screenshot",
inputSchema: z.object({ url: z.string() }),
run: async (input) => {
if (!isValidUrl(input.url)) {
throw new ToolError(`Invalid URL: ${input.url}`);
}
const result = await takeScreenshot(input.url);
if (result.error) {
// Include the error screenshot so the model can see what went wrong
throw new ToolError([
{ type: "text", text: `Failed to load page: ${result.error}` },
{
type: "image",
source: { type: "base64", data: result.screenshot, media_type: "image/png" }
}
]);
}
return {
type: "image",
source: { type: "base64", data: result.screenshot, media_type: "image/png" }
};
}
});

If a plain Error is thrown, the message will be converted to a text content block.
This SDK supports tool use, also known as function calling; see the tool use overview for more details.
This SDK provides helpers for integrating with Model Context Protocol (MCP) servers. These helpers convert MCP types to Claude API types, reducing boilerplate when working with MCP tools, prompts, and resources.
The Claude API also supports an mcp_servers parameter that lets Claude connect directly to remote MCP servers. Use mcp_servers when you have remote servers accessible via URL and only need tool support. Use the MCP helpers when you need local MCP servers, prompts, resources, or more control over the MCP connection.
For the Claude API's built-in remote MCP server support, see MCP Connector.
import {
mcpTools,
mcpMessages,
mcpResourceToContent,
mcpResourceToFile
} from "@anthropic-ai/sdk/helpers/beta/mcp";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
const anthropic = new Anthropic();
// Connect to an MCP server
const transport = new StdioClientTransport({ command: "mcp-server", args: [] });
const mcpClient = new Client({ name: "my-client", version: "1.0.0" });
await mcpClient.connect(transport);
// Use MCP prompts
const { messages } = await mcpClient.getPrompt({ name: "my-prompt" });
const response = await anthropic.beta.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: mcpMessages(messages)
});
// Use MCP tools with toolRunner
const { tools } = await mcpClient.listTools();
const runner = await anthropic.beta.messages.toolRunner({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Use the available tools" }],
tools: mcpTools(tools, mcpClient)
});
// Use MCP resources as content
const resource = await mcpClient.readResource({ uri: "file:///path/to/doc.txt" });
await anthropic.beta.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: [
mcpResourceToContent(resource),
{ type: "text", text: "Summarize this document" }
]
}
]
});
// Upload MCP resources as files
const fileResource = await mcpClient.readResource({ uri: "file:///path/to/data.json" });
await anthropic.beta.files.upload({ file: mcpResourceToFile(fileResource) });

The conversion functions throw UnsupportedMCPValueError if an MCP value isn't supported by the Claude API (e.g., an unsupported content type, an unsupported MIME type, or a non-http/https resource link).
This SDK provides support for the Message Batches API under the client.messages.batches namespace.
A Message Batch takes an array of requests, where each object has a custom_id identifier and the exact same request params as the standard Messages API:
await client.messages.batches.create({
requests: [
{
custom_id: "my-first-request",
params: {
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, world" }]
}
},
{
custom_id: "my-second-request",
params: {
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Hi again, friend" }]
}
}
]
});

Once a Message Batch has been processed, indicated by .processing_status === 'ended', you can access the results with .batches.results():
const results = await client.messages.batches.results(batch_id);
for await (const entry of results) {
if (entry.result.type === "succeeded") {
console.log(entry.result.message.content);
}
}

Request parameters that correspond to file uploads can be passed in many different forms:

- a File (or an object with the same structure)
- a fetch Response (or an object with the same structure)
- an fs.ReadStream
- the return value of our toFile helper

Set the content-type explicitly, as the Files API will not infer it for you:
import fs from "fs";
import Anthropic, { toFile } from "@anthropic-ai/sdk";
const client = new Anthropic();
// If you have access to Node `fs` we recommend using `fs.createReadStream()`:
await client.beta.files.upload({
file: await toFile(fs.createReadStream("/path/to/file"), undefined, {
type: "application/json"
}),
betas: ["files-api-2025-04-14"]
});
// Or if you have the web `File` API you can pass a `File` instance:
await client.beta.files.upload({
file: new File(["my bytes"], "file.txt", { type: "text/plain" }),
betas: ["files-api-2025-04-14"]
});
// You can also pass a `fetch` `Response`:
await client.beta.files.upload({
file: await fetch("https://somesite/file"),
betas: ["files-api-2025-04-14"]
});
// Or a `Buffer` / `Uint8Array`
await client.beta.files.upload({
file: await toFile(Buffer.from("my bytes"), "file", { type: "text/plain" }),
betas: ["files-api-2025-04-14"]
});
await client.beta.files.upload({
file: await toFile(new Uint8Array([0, 1, 2]), "file", { type: "text/plain" }),
betas: ["files-api-2025-04-14"]
});

When the library is unable to connect to the API,
or if the API returns a non-success status code (i.e., 4xx or 5xx response),
a subclass of APIError will be thrown:
const message = await client.messages
.create({
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
})
.catch(async (err) => {
if (err instanceof Anthropic.APIError) {
console.log(err.status); // 400
console.log(err.name); // BadRequestError
console.log(err.headers); // {server: 'nginx', ...}
} else {
throw err;
}
});

Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
For more information on debugging requests, see these docs.
All object responses in the SDK provide a _request_id property which is added from the request-id response header so that you can quickly log failing requests and report them back to Anthropic.
const message = await client.messages.create({
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
});
console.log(message._request_id); // req_018EeWyXxfu5pfWkrYcMdjWG

Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the maxRetries option to configure or disable this:
// Configure the default for all requests:
const client = new Anthropic({
maxRetries: 0 // default is 2
});
// Or, configure per-request:
await client.messages.create(
{
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
},
{ maxRetries: 5 }
);

By default, requests time out after 10 minutes. However, if you have specified a large max_tokens value and are not streaming, the default timeout is calculated dynamically using the formula:
const minimum = 10 * 60;
const calculated = (60 * 60 * maxTokens) / 128_000;
return calculated < minimum ? minimum * 1000 : calculated * 1000;

This will result in a timeout of up to 60 minutes, scaled by the max_tokens parameter, unless overridden at the request or client level.
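For reference, the same formula as a standalone function (calculateDefaultTimeoutMs is a hypothetical name; the SDK computes this internally and returns milliseconds):

```typescript
// Mirrors the formula above: at least 10 minutes, scaling linearly
// with max_tokens up to 60 minutes at 128k tokens.
function calculateDefaultTimeoutMs(maxTokens: number): number {
  const minimumSeconds = 10 * 60;
  const calculatedSeconds = (60 * 60 * maxTokens) / 128_000;
  return calculatedSeconds < minimumSeconds
    ? minimumSeconds * 1000
    : calculatedSeconds * 1000;
}

console.log(calculateDefaultTimeoutMs(1024)); // small requests keep the 10-minute floor
console.log(calculateDefaultTimeoutMs(128_000)); // scales up to a 60-minute timeout
```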
You can configure this with a timeout option:
// Configure the default for all requests:
const client = new Anthropic({
timeout: 20 * 1000 // 20 seconds (default is 10 minutes)
});
// Override per-request:
await client.messages.create(
{
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
},
{ timeout: 5 * 1000 }
);

On timeout, an APIConnectionTimeoutError is thrown.
Note that requests which time out will be retried twice by default.
Consider using the streaming Messages API for longer running requests.
Avoid setting a large max_tokens value without using streaming.
Some networks may drop idle connections after a certain period of time, which
can cause the request to fail or timeout without receiving a response from Anthropic.
The SDK will also throw an error if a non-streaming request is expected to take longer than roughly 10 minutes.
Passing stream: true or overriding the timeout option at the client or request level disables this error.
If a non-streaming request's expected latency exceeds the timeout, the client will terminate the connection and retry without ever receiving a response.
When supported by the fetch implementation, the SDK sets a TCP socket keep-alive option in order
to reduce the impact of idle connection timeouts on some networks.
This can be overridden by configuring a custom proxy.
List methods in the Claude API are paginated.
You can use the for await ... of syntax to iterate through items across all pages:
async function fetchAllMessageBatches() {
  const allMessageBatches = [];
  // Automatically fetches more pages as needed.
  for await (const messageBatch of client.messages.batches.list({ limit: 20 })) {
    allMessageBatches.push(messageBatch);
  }
  return allMessageBatches;
}

Alternatively, you can request a single page at a time:
let page = await client.messages.batches.list({ limit: 20 });
for (const messageBatch of page.data) {
console.log(messageBatch);
}
// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
page = await page.getNextPage();
// ...
}

The SDK automatically sends the anthropic-version header set to 2023-06-01.
If you need to, you can override it by setting default headers on a per-request basis.
Be aware that doing so may result in incorrect types and other unexpected or undefined behavior in the SDK.
const client = new Anthropic();
const message = await client.messages.create(
{
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
},
{ headers: { "anthropic-version": "My-Custom-Value" } }
);

The "raw" Response returned by fetch() can be accessed through the .asResponse() method on the APIPromise type that all methods return.
This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.
You can also use the .withResponse() method to get the raw Response along with the parsed data.
Unlike .asResponse() this method consumes the body, returning once it is parsed.
const client = new Anthropic();
const response = await client.messages
.create({
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
})
.asResponse();
console.log(response.headers.get("X-My-Header"));
console.log(response.statusText); // access the underlying Response object
const { data: message, response: raw } = await client.messages
.create({
max_tokens: 1024,
messages: [{ role: "user", content: "Hello, Claude" }],
model: "claude-opus-4-6"
})
.withResponse();
console.log(raw.headers.get("X-My-Header"));
console.log(message.content);

All log messages are intended for debugging only. The format and content of log messages may change between releases.
The log level can be configured in two ways:
- the ANTHROPIC_LOG environment variable
- the logLevel client option (overrides the environment variable if set)

const client = new Anthropic({
logLevel: "debug" // Show all log messages
});

Available log levels, from most to least verbose:

- 'debug' - Show debug messages, info, warnings, and errors
- 'info' - Show info messages, warnings, and errors
- 'warn' - Show warnings and errors (default)
- 'error' - Show only errors
- 'off' - Disable all logging

At the 'debug' level, all HTTP requests and responses are logged, including headers and bodies.
Some authentication-related headers are redacted, but sensitive data in request and response bodies
may still be visible.
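The environment variable can be set for a single run without touching code (app.js here is a hypothetical entry point):

```shell
# Enable verbose logging for one invocation only
ANTHROPIC_LOG=debug node app.js
```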
By default, this library logs to globalThis.console. You can also provide a custom logger.
Most logging libraries are supported, including pino, winston, bunyan, consola, signale, and @std/log. If your logger doesn't work, please open an issue.
When providing a custom logger, the logLevel option still controls which messages are emitted; messages below the configured level will not be sent to your logger.
import pino from "pino";
const logger = pino();
const client = new Anthropic({
logger: logger.child({ name: "Anthropic" }),
logLevel: "debug" // Send all messages to pino, allowing it to filter
});

This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.
To make requests to undocumented endpoints, you can use client.get, client.post, and other HTTP verbs.
Options on the client, such as retries, will be respected when making these requests.
await client.post("/some/path", {
body: { some_prop: "foo" },
query: { some_query_arg: "bar" }
});

To make requests using undocumented parameters, you may use // @ts-expect-error on the undocumented
parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you
send will be sent as-is.
client.messages.create({
// ...
// @ts-expect-error baz is not yet public
baz: "undocumented option"
});

For requests with the GET verb, any extra params will be sent in the query string; all other requests will send the extra params in the body.
If you want to explicitly send an extra argument, you can do so with the query, body, and headers request
options.
To access undocumented response properties, you may use // @ts-expect-error on the property access, or cast the response object to the requisite type. Like the request params, the SDK does not validate or strip extra properties from the API response.
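A runnable sketch of the casting approach, using a hand-written payload in place of a real API response (new_field is purely illustrative, not a real property):

```typescript
// Stand-in for a response the static types don't fully describe.
const raw = JSON.parse('{"id":"msg_123","new_field":"surprise"}') as {
  id: string;
} & Record<string, unknown>;

// Extra properties are not stripped from responses,
// so the undocumented field is still present at runtime:
console.log(raw["new_field"]); // → surprise
```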
By default, this library expects that a global fetch function is defined.
If you want to use a different fetch function, you can either polyfill the global:
import fetch from "my-fetch";
globalThis.fetch = fetch;

Or pass it to the client:
import fetch from "my-fetch";
const client = new Anthropic({ fetch });

If you want to set custom fetch options without overriding the fetch function, you can provide a fetchOptions object when instantiating the client or making a request. (Request-specific options override client options.)
const client = new Anthropic({
fetchOptions: {
// `RequestInit` options
}
});

To modify proxy behavior, you can provide custom fetchOptions that add runtime-specific proxy options to requests:
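For example, in Node.js (where the built-in fetch is backed by undici), one way to route requests through a proxy is undici's ProxyAgent passed as a dispatcher. A sketch, assuming your proxy URL lives in the PROXY_URL environment variable:

```typescript
import Anthropic from "@anthropic-ai/sdk";
import { ProxyAgent } from "undici";

// `dispatcher` is an undici-specific RequestInit extension that Node's
// fetch understands; other runtimes use different proxy options.
const client = new Anthropic({
  fetchOptions: {
    dispatcher: new ProxyAgent(process.env["PROXY_URL"] ?? ""),
  },
});
```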
Beta features are available before general release to get early feedback and test new functionality. You can check the availability of all of Claude's capabilities and tools in the build with Claude overview.
You can access most beta API features through the beta property of the client. To enable a particular beta feature, you need to add the appropriate beta header to the betas field when creating a message.
For example, to use the Files API:
const client = new Anthropic();
const response = await client.beta.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [
{
role: "user",
content: [
{ type: "text", text: "Please summarize this document for me." },
{
type: "document",
source: {
type: "file",
file_id: "file_abc123"
}
}
]
}
],
betas: ["files-api-2025-04-14"]
});

For detailed platform setup guides with code examples, see:
The TypeScript SDK supports Bedrock, Vertex AI, and Foundry through separate npm packages:
- npm install @anthropic-ai/bedrock-sdk: Provides the AnthropicBedrock client
- npm install @anthropic-ai/vertex-sdk: Provides the AnthropicVertex client
- npm install @anthropic-ai/foundry-sdk: Provides the AnthropicFoundry client

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
Backwards compatibility is taken seriously to ensure you can rely on a smooth upgrade experience.
See the GitHub repository for FAQs, issues, and community support.