When you make a request to the Messages API, Claude's response includes a stop_reason field that indicates why the model stopped generating its response. Understanding these values is essential for building robust applications that handle different response types appropriately.
For details about stop_reason in the API response, see the Messages API reference.
The stop_reason field is part of every successful Messages API response. Unlike errors, which indicate a failure to process your request, stop_reason tells you why Claude successfully finished generating its response.
{
  "id": "msg_01234",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Here's the answer to your question..."
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 100,
    "output_tokens": 50
  }
}

The most common stop reason. It indicates that Claude finished its response naturally.
from anthropic import Anthropic
client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)

if response.stop_reason == "end_turn":
    # Process the complete response
    print(response.content[0].text)

Sometimes Claude returns an empty response (just 2-3 tokens with no content) with stop_reason: "end_turn". This usually happens when Claude interprets the assistant turn as already complete, especially after tool results.
Common causes:
How to prevent empty responses:
# INCORRECT: Adding text immediately after tool_result
messages = [
    {"role": "user", "content": "Calculate the sum of 1234 and 5678"},
    {
        "role": "assistant",
        "content": [
            {
                "type": "tool_use",
                "id": "toolu_123",
                "name": "calculator",
                "input": {"operation": "add", "a": 1234, "b": 5678},
            }
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "tool_result", "tool_use_id": "toolu_123", "content": "6912"},
            {
                "type": "text",
                "text": "Here's the result",  # Don't add text after tool_result
            },
        ],
    },
]
# CORRECT: Send tool results directly without additional text
messages = [
    {"role": "user", "content": "Calculate the sum of 1234 and 5678"},
    {
        "role": "assistant",
        "content": [
            {
                "type": "tool_use",
                "id": "toolu_123",
                "name": "calculator",
                "input": {"operation": "add", "a": 1234, "b": 5678},
            }
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "tool_result", "tool_use_id": "toolu_123", "content": "6912"}
        ],
    },  # Just the tool_result, no additional text
]
# If you still get empty responses after fixing the above:
def handle_empty_response(client, messages):
    response = client.messages.create(
        model="claude-opus-4-1", max_tokens=1024, messages=messages
    )

    # Check if response is empty
    if response.stop_reason == "end_turn" and not response.content:
        # INCORRECT: Don't just retry with the empty response
        # This won't work because Claude already decided it's done

        # CORRECT: Add a continuation prompt in a NEW user message
        messages.append({"role": "user", "content": "Please continue"})
        response = client.messages.create(
            model="claude-opus-4-1", max_tokens=1024, messages=messages
        )

    return response

Best practices: send tool results as the only content in the user message, and if you do receive an empty response, continue with a new user message rather than retrying the same request unchanged.
Claude stopped because it reached the max_tokens limit specified in your request.
# Request with limited tokens
response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=10,
    messages=[{"role": "user", "content": "Explain quantum physics"}],
)

if response.stop_reason == "max_tokens":
    # Response was truncated
    print("Response was cut off at token limit")
    # Consider making another request to continue

If Claude's response is cut off because it hit the max_tokens limit, and the truncated response contains an incomplete tool use block, you will need to retry the request with a higher max_tokens value to get the complete tool use.
Claude encountered one of your custom stop sequences.
response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=1024,
    stop_sequences=["END", "STOP"],
    messages=[{"role": "user", "content": "Generate text until you say END"}],
)

if response.stop_reason == "stop_sequence":
    print(f"Stopped at sequence: {response.stop_sequence}")

Claude called a tool and expects you to execute it.
For most tool use implementations, we recommend using the tool runner, which automatically handles tool execution, result formatting, and conversation management.
from anthropic import Anthropic

client = Anthropic()

weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state"},
        },
        "required": ["location"],
    },
}

def execute_tool(name, tool_input):
    """Execute a tool and return the result."""
    return f"Weather in {tool_input.get('location', 'unknown')}: 72°F"

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[weather_tool],
    messages=[{"role": "user", "content": "What's the weather?"}],
)

if response.stop_reason == "tool_use":
    # Extract and execute the tool
    for content in response.content:
        if content.type == "tool_use":
            result = execute_tool(content.name, content.input)
            # Return result to Claude for final response

Returned when the server-side sampling loop reaches its iteration limit while running server tools such as web search or web fetch. The default limit is 10 iterations per request.
When this happens, the response may contain a server_tool_use block without a corresponding server_tool_result. To let Claude finish processing, continue the conversation by sending the response back as-is.
response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search"}],
    messages=[{"role": "user", "content": "Search for latest AI news"}],
)

if response.stop_reason == "pause_turn":
    # Continue the conversation by sending the response back
    messages = [
        {"role": "user", "content": original_query},
        {"role": "assistant", "content": response.content},
    ]
    continuation = client.messages.create(
        model="claude-opus-4-1",
        max_tokens=1024,
        messages=messages,
        tools=[{"type": "web_search_20250305", "name": "web_search"}],
    )

Your application should handle pause_turn in any agentic loop that uses server tools. Simply append the assistant response to your message array and make another API request to let Claude continue.
Claude declined to generate a response due to safety concerns.
response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=1024,
    messages=[{"role": "user", "content": "[Unsafe request]"}],
)

if response.stop_reason == "refusal":
    # Claude declined to respond
    print("Claude was unable to process this request")
    # Consider rephrasing or modifying the request

If you frequently encounter refusal stop reasons when using Claude Sonnet 4.5 or Opus 4.1, you can try updating your API calls to use Haiku 4.5 (claude-haiku-4-5-20251001), which has different usage restrictions.
To learn more about refusals triggered by the API safety filters for Claude Sonnet 4.5, see Understanding the Sonnet 4.5 API Safety Filters.
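If a fallback model fits your use case, the model switch can be wrapped in a small helper. This is a sketch only: the helper name, the single-retry policy, and the choice of fallback model are illustrative, not an official pattern.

```python
FALLBACK_MODEL = "claude-haiku-4-5-20251001"  # fallback suggested in the text above

def create_with_refusal_fallback(client, **request_kwargs):
    """Call the Messages API; if Claude refuses, retry once with a
    fallback model that has different usage restrictions.

    A sketch only: whether falling back is appropriate depends on why
    the request was refused, so review refusals before retrying blindly.
    """
    response = client.messages.create(**request_kwargs)
    if response.stop_reason == "refusal":
        retry_kwargs = dict(request_kwargs, model=FALLBACK_MODEL)
        response = client.messages.create(**retry_kwargs)
    return response
```

Because the helper only relies on the `messages.create` interface, any object with that shape can be passed in, which also makes the behavior easy to unit test with a stub client.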
Claude stopped because it reached the model's context window limit. This lets you request the maximum possible tokens without knowing the exact input size.
# Request with maximum tokens to get as much as possible
response = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=64000,  # Request maximum output tokens
    messages=[
        {"role": "user", "content": "Large input that uses most of context window..."}
    ],
)

if response.stop_reason == "model_context_window_exceeded":
    # Response hit context window limit before max_tokens
    print("Response reached model's context window limit")
    # The response is still valid but was limited by context window

This stop reason is available by default on Sonnet 4.5 and newer models. For earlier models, use the beta header model-context-window-exceeded-2025-08-26 to enable this behavior.
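As a sketch of opting an earlier model in, the beta header can be passed through the SDK's generic extra_headers parameter. The helper name and default max_tokens value below are illustrative assumptions; only the header name comes from the text above.

```python
BETA_HEADER = "model-context-window-exceeded-2025-08-26"  # header named above

def context_window_request_kwargs(model, prompt, max_tokens=64000):
    """Build Messages API kwargs that enable the
    model_context_window_exceeded stop reason on an earlier model
    via the beta header. Illustrative helper, not an official API.
    """
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
        # extra_headers is the SDK's mechanism for sending custom headers
        "extra_headers": {"anthropic-beta": BETA_HEADER},
    }

# Usage sketch:
# response = client.messages.create(
#     **context_window_request_kwargs("claude-sonnet-4-20250514", "Summarize ...")
# )
```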
Make it a habit to check stop_reason in your response handling logic:
def handle_response(response):
    if response.stop_reason == "tool_use":
        return handle_tool_use(response)
    elif response.stop_reason == "max_tokens":
        return handle_truncation(response)
    elif response.stop_reason == "model_context_window_exceeded":
        return handle_context_limit(response)
    elif response.stop_reason == "pause_turn":
        return handle_pause(response)
    elif response.stop_reason == "refusal":
        return handle_refusal(response)
    else:
        # Handle end_turn and other cases
        return response.content[0].text

When a response is truncated due to the token limit or the context window:
def handle_truncated_response(response):
    if response.stop_reason in ["max_tokens", "model_context_window_exceeded"]:
        # Option 1: Warn the user about the specific limit
        if response.stop_reason == "max_tokens":
            message = "[Response truncated due to max_tokens limit]"
        else:
            message = "[Response truncated due to context window limit]"
        return f"{response.content[0].text}\n\n{message}"

        # Option 2: Continue generation
        messages = [
            {"role": "user", "content": original_prompt},
            {"role": "assistant", "content": response.content[0].text},
        ]
        continuation = client.messages.create(
            model="claude-opus-4-1",
            max_tokens=1024,
            messages=messages + [{"role": "user", "content": "Please continue"}],
        )
        return response.content[0].text + continuation.content[0].text

When using server tools, the API may return pause_turn if the server-side sampling loop reaches its iteration limit (default 10). Handle this by continuing the conversation:
def handle_server_tool_conversation(client, user_query, tools, max_continuations=5):
    """
    Handle server tool conversations that may require multiple continuations.

    The server runs a sampling loop when executing server tools. If the loop
    reaches its iteration limit, the API returns pause_turn. Continue the
    conversation by sending the response back to let Claude finish.
    """
    messages = [{"role": "user", "content": user_query}]

    for _ in range(max_continuations):
        response = client.messages.create(
            model="claude-opus-4-1", max_tokens=1024, messages=messages, tools=tools
        )

        if response.stop_reason != "pause_turn":
            # Claude finished processing - return the final response
            return response

        # pause_turn: replace the full message list to maintain alternating roles
        messages = [
            {"role": "user", "content": user_query},
            {"role": "assistant", "content": response.content},
        ]

    # Reached max continuations - return the last response
    return response

It is important to distinguish between stop_reason values and actual errors:
import anthropic
from anthropic import Anthropic

client = Anthropic()

try:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}],
    )
    # Handle successful response with stop_reason
    if response.stop_reason == "max_tokens":
        print("Response was truncated")
except anthropic.APIError as e:
    # Handle actual errors
    if e.status_code == 429:
        print("Rate limit exceeded")
    elif e.status_code == 500:
        print("Server error")

When using streaming, stop_reason is:
- null in the initial message_start event
- Provided in the final message_delta event

from anthropic import Anthropic
client = Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
) as stream:
    for event in stream:
        if event.type == "message_delta":
            stop_reason = event.delta.stop_reason
            if stop_reason:
                print(f"Stream ended with: {stop_reason}")

Simpler with the tool runner: the example below shows manual tool handling. For most use cases, the tool runner handles tool execution automatically with far less code.
def complete_tool_workflow(client, user_query, tools):
    messages = [{"role": "user", "content": user_query}]

    while True:
        response = client.messages.create(
            model="claude-opus-4-1", max_tokens=1024, messages=messages, tools=tools
        )

        if response.stop_reason == "tool_use":
            # Execute tools and continue
            tool_results = execute_tools(response.content)
            messages.append({"role": "assistant", "content": response.content})
            messages.append({"role": "user", "content": tool_results})
        else:
            # Final response
            return response

def get_complete_response(client, prompt, max_attempts=3):
    messages = [{"role": "user", "content": prompt}]
    full_response = ""

    for _ in range(max_attempts):
        response = client.messages.create(
            model="claude-opus-4-1", messages=messages, max_tokens=4096
        )
        full_response += response.content[0].text

        if response.stop_reason != "max_tokens":
            break

        # Continue from where it left off
        messages = [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": full_response},
            {"role": "user", "content": "Please continue from where you left off."},
        ]

    return full_response

With the model_context_window_exceeded stop reason, you can request the maximum possible tokens without calculating the input token count:
def get_max_possible_tokens(client, prompt):
    """
    Get as many tokens as possible within the model's context window
    without needing to calculate input token count
    """
    response = client.messages.create(
        model="claude-opus-4-1",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=64000,  # Request maximum output tokens
    )

    if response.stop_reason == "model_context_window_exceeded":
        # Got the maximum possible tokens given input size
        print(
            f"Generated {response.usage.output_tokens} tokens (context limit reached)"
        )
    elif response.stop_reason == "max_tokens":
        # Got exactly the requested tokens
        print(f"Generated {response.usage.output_tokens} tokens (max_tokens reached)")
    else:
        # Natural completion
        print(f"Generated {response.usage.output_tokens} tokens (natural completion)")

    return response.content[0].text

By handling stop_reason values correctly, you can build more robust applications that gracefully handle different response scenarios and provide a better user experience.
When a response is truncated during tool use and ends with an incomplete tool_use block, retry with a higher max_tokens value:

# Check if response was truncated during tool use
if response.stop_reason == "max_tokens":
    # Check if the last content block is an incomplete tool_use
    last_block = response.content[-1]
    if last_block.type == "tool_use":
        # Send the request with higher max_tokens
        response = client.messages.create(
            model="claude-opus-4-1",
            max_tokens=4096,  # Increased limit
            messages=messages,
            tools=tools,
        )