    Handling stop reasons

    When you send a request to the Messages API, Claude's response includes a stop_reason field that indicates why the model stopped generating. Understanding these values is essential for building robust applications that handle different response types correctly.

    For details on stop_reason in API responses, see the Messages API reference.

    What is stop_reason?

    The stop_reason field is part of every successful Messages API response. Unlike errors, which indicate a failure to process the request, stop_reason tells you why Claude successfully completed generating its response.

    Example response
    {
      "id": "msg_01234",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "Here's the answer to your question..."
        }
      ],
      "stop_reason": "end_turn",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 100,
        "output_tokens": 50
      }
    }

    Stop reason values

    end_turn

    The most common stop reason. Indicates that Claude finished its response naturally.

    if response.stop_reason == "end_turn":
        # Process the complete response
        print(response.content[0].text)

    Empty responses with end_turn

    Sometimes Claude returns an empty response (exactly 2-3 tokens with no content) with stop_reason: "end_turn". This typically happens when Claude judges that the assistant turn is already complete, particularly after tool results.

    Common causes:

    • Adding a text block immediately after a tool result (Claude has learned to expect the user to always insert text after tool results, so it ends its own turn to follow that pattern)
    • Sending Claude's completed response back without adding anything (Claude has already decided it is done, so it stays done)

    How to prevent empty responses:

    # INCORRECT: Adding text immediately after tool_result
    messages = [
        {"role": "user", "content": "Calculate the sum of 1234 and 5678"},
        {"role": "assistant", "content": [
            {
                "type": "tool_use",
                "id": "toolu_123",
                "name": "calculator",
                "input": {"operation": "add", "a": 1234, "b": 5678}
            }
        ]},
        {"role": "user", "content": [
            {
                "type": "tool_result",
                "tool_use_id": "toolu_123",
                "content": "6912"
            },
            {
                "type": "text",
                "text": "Here's the result"  # Don't add text after tool_result
            }
        ]}
    ]
    
    # CORRECT: Send tool results directly without additional text
    messages = [
        {"role": "user", "content": "Calculate the sum of 1234 and 5678"},
        {"role": "assistant", "content": [
            {
                "type": "tool_use",
                "id": "toolu_123",
                "name": "calculator",
                "input": {"operation": "add", "a": 1234, "b": 5678}
            }
        ]},
        {"role": "user", "content": [
            {
                "type": "tool_result",
                "tool_use_id": "toolu_123",
                "content": "6912"
            }
        ]}  # Just the tool_result, no additional text
    ]
    
    # If you still get empty responses after fixing the above:
    def handle_empty_response(client, messages):
        response = client.messages.create(
            model="claude-opus-4-6",
            max_tokens=1024,
            messages=messages
        )
    
        # Check if response is empty
        if (response.stop_reason == "end_turn" and
            not response.content):
    
            # INCORRECT: Don't just retry with the empty response
            # This won't work because Claude already decided it's done
    
            # CORRECT: Add a continuation prompt in a NEW user message
            messages.append({"role": "user", "content": "Please continue"})
    
            response = client.messages.create(
                model="claude-opus-4-6",
                max_tokens=1024,
                messages=messages
            )
    
        return response

    Best practices:

    1. Never add text blocks immediately after tool results - this teaches Claude to expect user input after every tool use
    2. Don't retry an empty response without changing anything - simply sending the empty response back won't help
    3. Use a continuation prompt as a last resort - only if the fixes above don't resolve the issue
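    The first rule above can be checked mechanically before each request. A minimal sketch, assuming the plain-dict message format used in the examples on this page (the helper name is ours, not part of the SDK):

```python
# Hypothetical helper (not part of the Anthropic SDK): detects a text block
# placed after a tool_result block inside a user message, which this page
# identifies as a common cause of empty end_turn responses.
def has_text_after_tool_result(messages):
    for message in messages:
        content = message.get("content")
        if message.get("role") != "user" or not isinstance(content, list):
            continue  # plain-string content cannot contain tool results
        seen_tool_result = False
        for block in content:
            if block.get("type") == "tool_result":
                seen_tool_result = True
            elif block.get("type") == "text" and seen_tool_result:
                return True  # the anti-pattern: text after a tool_result
    return False
```

    Running this check on your messages list before calling the API can catch the anti-pattern from the INCORRECT example above during development.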

    max_tokens

    Claude stopped because it reached the max_tokens limit specified in your request.

    # Request with limited tokens
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=10,
        messages=[{"role": "user", "content": "Explain quantum physics"}]
    )
    
    if response.stop_reason == "max_tokens":
        # Response was truncated
        print("Response was cut off at token limit")
        # Consider making another request to continue

    stop_sequence

    Claude encountered one of your custom stop sequences.

    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        stop_sequences=["END", "STOP"],
        messages=[{"role": "user", "content": "Generate text until you say END"}]
    )
    
    if response.stop_reason == "stop_sequence":
        print(f"Stopped at sequence: {response.stop_sequence}")

    tool_use

    Claude is calling a tool and expects you to execute it.

    For most tool use implementations, we recommend using the tool runner, which handles tool execution, result formatting, and conversation management automatically.

    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        tools=[weather_tool],
        messages=[{"role": "user", "content": "What's the weather?"}]
    )
    
    if response.stop_reason == "tool_use":
        # Extract and execute the tool
        for content in response.content:
            if content.type == "tool_use":
                result = execute_tool(content.name, content.input)
                # Return result to Claude for final response

    pause_turn

    Used with server tools such as web search, when Claude needs to pause a long-running operation.

    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        tools=[{"type": "web_search_20250305", "name": "web_search"}],
        messages=[{"role": "user", "content": "Search for latest AI news"}]
    )
    
    if response.stop_reason == "pause_turn":
        # Continue the conversation
        messages = [
            {"role": "user", "content": original_query},
            {"role": "assistant", "content": response.content}
        ]
        continuation = client.messages.create(
            model="claude-opus-4-6",
            max_tokens=1024,  # max_tokens is required on every request
            messages=messages,
            tools=[{"type": "web_search_20250305", "name": "web_search"}]
        )

    refusal

    Claude declined to generate a response due to safety concerns.

    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": "[Unsafe request]"}]
    )
    
    if response.stop_reason == "refusal":
        # Claude declined to respond
        print("Claude was unable to process this request")
        # Consider rephrasing or modifying the request

    If you frequently encounter the refusal stop reason while using Claude Sonnet 4.5 or Opus 4.1, you can try updating your API calls to use Sonnet 4 (claude-sonnet-4-20250514), which has different usage restrictions. To learn more about refusals triggered by Claude Sonnet 4.5's API safety filters, see Understanding Sonnet 4.5's API safety filters.

    model_context_window_exceeded

    Claude stopped because it reached the model's context window limit. This lets you request the maximum possible number of tokens without knowing the exact input size.

    # Request with maximum tokens to get as much as possible
    response = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=64000,  # Model's maximum output tokens
        messages=[{"role": "user", "content": "Large input that uses most of context window..."}]
    )
    
    if response.stop_reason == "model_context_window_exceeded":
        # Response hit context window limit before max_tokens
        print("Response reached model's context window limit")
        # The response is still valid but was limited by context window

    This stop reason is available by default on Sonnet 4.5 and newer models. For earlier models, enable this behavior with the beta header model-context-window-exceeded-2025-08-26.
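    As a sketch, the header can be supplied per request; the model id below is illustrative, and extra_headers is the Anthropic Python SDK's pass-through option for custom HTTP headers:

```python
# Sketch: request options that enable the model_context_window_exceeded
# stop reason on a pre-Sonnet-4.5 model. The header name/value come from
# this page; the model id is illustrative.
request_kwargs = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 64000,
    "messages": [{"role": "user", "content": "Large input..."}],
    # Forwarded as an HTTP header by the SDK's extra_headers option
    "extra_headers": {
        "anthropic-beta": "model-context-window-exceeded-2025-08-26"
    },
}
# response = client.messages.create(**request_kwargs)
```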

    Best practices for handling stop reasons

    1. Always check stop_reason

    Make it a habit to check stop_reason in your response handling logic:

    def handle_response(response):
        if response.stop_reason == "tool_use":
            return handle_tool_use(response)
        elif response.stop_reason == "max_tokens":
            return handle_truncation(response)
        elif response.stop_reason == "model_context_window_exceeded":
            return handle_context_limit(response)
        elif response.stop_reason == "pause_turn":
            return handle_pause(response)
        elif response.stop_reason == "refusal":
            return handle_refusal(response)
        else:
            # Handle end_turn and other cases
            return response.content[0].text

    2. Handle truncated responses gracefully

    When a response is truncated by the token limit or the context window:

    def handle_truncated_response(response):
        if response.stop_reason in ["max_tokens", "model_context_window_exceeded"]:
            # Option 1: Warn the user about the specific limit
            if response.stop_reason == "max_tokens":
                message = "[Response truncated due to max_tokens limit]"
            else:
                message = "[Response truncated due to context window limit]"
            return f"{response.content[0].text}\n\n{message}"

            # Option 2 (an alternative to Option 1; remove the return above
            # to use this path): continue generation with a follow-up request
            messages = [
                {"role": "user", "content": original_prompt},
                {"role": "assistant", "content": response.content[0].text}
            ]
            continuation = client.messages.create(
                model="claude-opus-4-6",
                max_tokens=1024,
                messages=messages + [{"role": "user", "content": "Please continue"}]
            )
            return response.content[0].text + continuation.content[0].text

    3. Implement retry logic for pause_turn

    For server tools that may pause:

    def handle_paused_conversation(initial_response, max_retries=3):
        response = initial_response
        messages = [{"role": "user", "content": original_query}]
        
        for attempt in range(max_retries):
            if response.stop_reason != "pause_turn":
                break
                
            messages.append({"role": "assistant", "content": response.content})
            response = client.messages.create(
                model="claude-opus-4-6",
                max_tokens=1024,  # max_tokens is required on every request
                messages=messages,
                tools=original_tools
            )
        
        return response

    Stop reasons vs. errors

    It is important to distinguish stop_reason values from actual errors:

    Stop reasons (successful responses)

    • Part of the response body
    • Indicate why generation stopped normally
    • The response contains valid content

    Errors (failed requests)

    • HTTP status codes 4xx or 5xx
    • Indicate that request processing failed
    • The response contains error details
    try:
        response = client.messages.create(...)
        
        # Handle successful response with stop_reason
        if response.stop_reason == "max_tokens":
            print("Response was truncated")
        
    except anthropic.APIError as e:
        # Handle actual errors
        if e.status_code == 429:
            print("Rate limit exceeded")
        elif e.status_code == 500:
            print("Server error")

    Streaming considerations

    When streaming, stop_reason behaves as follows:

    • null in the initial message_start event
    • Provided in the message_delta event
    • Not provided in any other event
    with client.messages.stream(...) as stream:
        for event in stream:
            if event.type == "message_delta":
                stop_reason = event.delta.stop_reason
                if stop_reason:
                    print(f"Stream ended with: {stop_reason}")

    Common patterns

    Handling tool use workflows

    Simpler with the tool runner: the example below shows manual tool handling. For most use cases, the tool runner handles tool execution automatically with far less code.

    def complete_tool_workflow(client, user_query, tools):
        messages = [{"role": "user", "content": user_query}]
    
        while True:
            response = client.messages.create(
                model="claude-opus-4-6",
                max_tokens=1024,  # max_tokens is required on every request
                messages=messages,
                tools=tools
            )
    
            if response.stop_reason == "tool_use":
                # Execute tools and continue
                tool_results = execute_tools(response.content)
                messages.append({"role": "assistant", "content": response.content})
                messages.append({"role": "user", "content": tool_results})
            else:
                # Final response
                return response

    Ensuring complete responses

    def get_complete_response(client, prompt, max_attempts=3):
        messages = [{"role": "user", "content": prompt}]
        full_response = ""
    
        for _ in range(max_attempts):
            response = client.messages.create(
                model="claude-opus-4-6",
                messages=messages,
                max_tokens=4096
            )
    
            full_response += response.content[0].text
    
            if response.stop_reason != "max_tokens":
                break
    
            # Continue from where it left off
            messages = [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": full_response},
                {"role": "user", "content": "Please continue from where you left off."}
            ]
    
        return full_response

    Getting the maximum tokens without knowing the input size

    With the model_context_window_exceeded stop reason, you can request the maximum possible number of tokens without calculating the input size:

    def get_max_possible_tokens(client, prompt):
        """
        Get as many tokens as possible within the model's context window
        without needing to calculate input token count
        """
        response = client.messages.create(
            model="claude-opus-4-6",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=64000  # Set to model's maximum output tokens
        )
    
        if response.stop_reason == "model_context_window_exceeded":
            # Got the maximum possible tokens given input size
            print(f"Generated {response.usage.output_tokens} tokens (context limit reached)")
        elif response.stop_reason == "max_tokens":
            # Got exactly the requested tokens
            print(f"Generated {response.usage.output_tokens} tokens (max_tokens reached)")
        else:
            # Natural completion
            print(f"Generated {response.usage.output_tokens} tokens (natural completion)")
    
        return response.content[0].text

    By handling stop_reason values correctly, you can build more robust applications that gracefully handle different response scenarios and provide a better user experience.
