Create a Text Completion
[Legacy] Create a Text Completion.
The Text Completions API is a legacy API. We recommend using the Messages API going forward.
Future models and features will not be compatible with Text Completions. See our migration guide for guidance in migrating from Text Completions to Messages.
Header Parameters
Body Parameters
max_tokens_to_sample (integer, required)
The maximum number of tokens to generate before stopping.
Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
prompt (string, required)
The prompt that you want Claude to complete.
For proper response generation you will need to format your prompt using alternating `\n\nHuman:` and `\n\nAssistant:` conversational turns. For example:

"\n\nHuman: {userQuestion}\n\nAssistant:"

See prompt validation and our guide to prompt design for more details.
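For illustration only, here is a minimal Python sketch that assembles a prompt in this alternating format and leaves a trailing `\n\nAssistant:` turn open for the model to complete. The build_prompt helper and the (role, text) turn representation are hypothetical names for this example, not part of any SDK:

# Hypothetical helper: builds a Text Completions prompt from (role, text) turns.
# The "\n\nHuman:" / "\n\nAssistant:" markers are the format the API expects;
# the function name and turn representation are illustrative only.
def build_prompt(turns):
    parts = []
    for role, text in turns:
        marker = "\n\nHuman:" if role == "user" else "\n\nAssistant:"
        parts.append(f"{marker} {text}")
    parts.append("\n\nAssistant:")  # leave the final Assistant turn open for completion
    return "".join(parts)

print(repr(build_prompt([("user", "Hello, world!")])))
# '\n\nHuman: Hello, world!\n\nAssistant:'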
stop_sequences (string[], optional)
Sequences that will cause the model to stop generating.
Our models stop on `"\n\nHuman:"`, and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.
stream (boolean, optional)
Whether to incrementally stream the response using server-sent events.
See streaming for details.
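As a rough, assumption-laden sketch of consuming the stream over plain HTTP with the third-party requests library (the official SDKs provide their own streaming helpers; event payloads are simplified here, and the full event shapes are described in the streaming docs):

# Minimal streaming sketch using the third-party `requests` package.
# Posts with "stream": true and prints each server-sent "data:" payload's
# "completion" delta as it arrives. Ping/error events are not handled here.
import json
import os

import requests

resp = requests.post(
    "https://api.anthropic.com/v1/complete",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-7-sonnet-latest",
        "max_tokens_to_sample": 256,
        "prompt": "\n\nHuman: Hello, world!\n\nAssistant:",
        "stream": True,
    },
    stream=True,  # keep the HTTP connection open and read incrementally
)
resp.raise_for_status()

for line in resp.iter_lines(decode_unicode=True):
    if line and line.startswith("data: "):
        event = json.loads(line[len("data: "):])
        # Completion events carry the next chunk of generated text.
        print(event.get("completion", ""), end="", flush=True)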
temperature (number, optional)
Amount of randomness injected into the response.
Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.
Note that even with temperature of 0.0, the results will not be fully deterministic.
top_k (integer, optional)
Only sample from the top K options for each subsequent token.
Used to remove "long tail" low probability responses. Learn more technical details here.
Recommended for advanced use cases only. You usually only need to use temperature.
top_p (number, optional)
Use nucleus sampling.
In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.
Recommended for advanced use cases only. You usually only need to use temperature.
Returns
Create a Text Completion
curl https://api.anthropic.com/v1/complete \
  -H 'Content-Type: application/json' \
  -H "X-Api-Key: $ANTHROPIC_API_KEY" \
  -H 'anthropic-version: 2023-06-01' \
  -d '{
    "max_tokens_to_sample": 256,
    "model": "claude-3-7-sonnet-latest",
    "prompt": "\n\nHuman: Hello, world!\n\nAssistant:"
  }'
"id": "compl_018CKm6gsux7P8yMcwZbeCPw",
"completion": " Hello! My name is Claude.",
"model": "claude-2.1",
"stop_reason": "stop_sequence",
"type": "completion"
}Returns Examples
{
"id": "compl_018CKm6gsux7P8yMcwZbeCPw",
"completion": " Hello! My name is Claude.",
"model": "claude-2.1",
"stop_reason": "stop_sequence",
"type": "completion"
}
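For reference, a hedged end-to-end sketch using the third-party requests library that sends the same request as the curl example above and reads the documented response fields; it assumes ANTHROPIC_API_KEY is set in the environment:

# End-to-end sketch of a non-streaming Text Completion request using the
# third-party `requests` package. Mirrors the curl example above.
import os

import requests

resp = requests.post(
    "https://api.anthropic.com/v1/complete",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-7-sonnet-latest",
        "max_tokens_to_sample": 256,
        "prompt": "\n\nHuman: Hello, world!\n\nAssistant:",
    },
)
resp.raise_for_status()
data = resp.json()

# "stop_sequence" means a stop sequence (built-in or supplied) was hit;
# "max_tokens" means the max_tokens_to_sample limit was reached.
print(data["stop_reason"])
print(data["completion"])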