

Prompt engineering overview

Before prompt engineering

This guide assumes that you have:

  1. A clear definition of the success criteria for your use case
  2. Some ways to empirically test against those criteria
  3. A first draft prompt you want to improve

If not, we strongly suggest spending time to establish those first. Check out Define success criteria and build evaluations for tips and guidance.

Prompt generator

Don't have a first draft prompt? Try the prompt generator in the Claude Console!

Prompting best practices

For model-specific tuning guidance for Claude's latest models, start here.


When to prompt engineer

This guide focuses on success criteria that are controllable through prompt engineering. Not every success criterion or failing eval is best solved by prompt engineering. For example, latency and cost can sometimes be more easily improved by selecting a different model.


How to prompt engineer

All prompting techniques, from clarity and examples to XML structuring, role prompting, thinking, and prompt chaining, are covered in Prompting best practices. That's the living reference; start there.

The Claude Console also offers prompting tools (prompt generator, templates and variables, and prompt improver) to help you build and refine prompts quickly.
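As a minimal illustration of two of the techniques named above, role prompting and XML structuring, here is a hedged sketch of how a prompt might be assembled. The `build_prompt` helper and the tag names are hypothetical choices for this example, not part of any official tool or API:

```python
# A minimal sketch of role prompting (an explicit persona instruction)
# combined with XML structuring (tags that separate data from the task).
# The helper name and tag names here are illustrative, not an official API.

def build_prompt(role: str, document: str, question: str) -> str:
    """Assemble a prompt that separates the role, the source document,
    and the question with XML tags so the model can tell them apart."""
    return (
        f"You are {role}.\n\n"
        f"<document>\n{document}\n</document>\n\n"
        f"<question>\n{question}\n</question>\n\n"
        "Answer using only information inside the <document> tags."
    )

prompt = build_prompt(
    role="a careful technical support analyst",
    document="Error 429 means the client sent too many requests.",
    question="What does error 429 mean?",
)
print(prompt)
```

Keeping instructions, data, and the task in clearly delimited sections like this tends to make prompts easier to iterate on, since each piece can be swapped or templated independently.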


Prompt engineering tutorial

If you prefer learning by doing, dive into our interactive tutorials instead!

GitHub prompting tutorial

An example-filled tutorial that covers the prompt engineering concepts found in our docs.

Google Sheets prompting tutorial

A lighter-weight version of our prompt engineering tutorial via an interactive spreadsheet.
