

    Classification

    Claude excels at processing, understanding, and recognizing patterns in text, images, and data. These capabilities make Claude especially powerful for classification tasks.

    This guide walks through the process of determining the best approach for building a classifier with Claude and the essentials of end-to-end deployment for a Claude classifier, from use case exploration to back-end integration.

    Visit our classification cookbooks to see example classification implementations using Claude.

    When to use Claude for classification

    When should you consider using an LLM instead of a traditional ML approach for your classification tasks? Here are some key indicators:

    1. Rule-based classes: Use Claude when classes are defined by conditions rather than examples, as it can understand underlying rules.
    2. Evolving classes: Claude adapts well to new or changing domains with emerging classes and shifting boundaries.
    3. Unstructured inputs: Claude can handle large volumes of unstructured text inputs of varying lengths.
    4. Limited labeled examples: With few-shot learning capabilities, Claude learns accurately from limited labeled training data.
    5. Reasoning requirements: Claude excels at classification tasks that require semantic understanding, context, and higher-level reasoning.

    Establish your classification use case

    Below is a non-exhaustive list, organized by industry, of common classification use cases where Claude excels.


    Implement Claude for classification

    The three key factors in choosing a model are intelligence, latency, and price.

    For classification, a smaller model like Claude Haiku 3 is typically ideal due to its speed and efficiency. However, for classification tasks that require specialized knowledge or complex reasoning, Sonnet or Opus may be a better choice. Learn more about how Opus, Sonnet, and Haiku compare here.

    Use evaluations to gauge whether a Claude model is performing well enough to launch into production.

    1. Build a strong input prompt

    While Claude offers strong baseline performance out of the box, a well-crafted input prompt helps get the best results.

    For a generic classifier that you can adapt to your specific use case, copy the starter prompt below:
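    The starter prompt itself is not captured in this text extract. As an illustration only, a generic single-label classification prompt might be assembled like this in Python (the category names, example ticket, and model ID are assumptions, not the official starter prompt):

    ```python
    # Illustrative categories for a hypothetical support-ticket classifier.
    CATEGORIES = ["Billing", "Technical issue", "Account access", "Other"]

    def build_prompt(text: str, categories: list[str]) -> str:
        """Assemble a prompt that asks for exactly one category label."""
        category_lines = "\n".join(f"- {c}" for c in categories)
        return (
            "You will classify the text inside <input> tags into exactly one "
            "of the following categories:\n"
            f"{category_lines}\n\n"
            f"<input>{text}</input>\n\n"
            "Respond with only the category name, and nothing else."
        )

    # To call the model, pass the prompt to the Messages API, e.g. with the
    # official Python SDK (requires an API key):
    #
    #   import anthropic
    #   client = anthropic.Anthropic()
    #   message = client.messages.create(
    #       model="claude-3-haiku-20240307",  # fast, low-cost choice for classification
    #       max_tokens=50,
    #       messages=[{"role": "user",
    #                  "content": build_prompt("I can't log in.", CATEGORIES)}],
    #   )
    #   print(message.content[0].text)
    ```

    Wrapping the input in XML-style tags and constraining the output to a bare category name keeps responses easy to parse downstream.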

    Our prompt library also offers a wide range of starter prompts, including several for classification use cases:

    Sentiment Analysis

    Detect the tone and sentiment behind tweets. Understand user emotions, opinions, and reactions in real-time.

    Customer Review Classification

    Categorize feedback into pre-specified tags. Streamline product insights and customer service responses.

    2. Develop your test cases

    To run your classification evaluation, you will need a set of test cases. Take a look at our guide to developing test cases.
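    As a minimal sketch, test cases can be kept as simple input/golden-label pairs and scored against any classifier function (the tickets and labels below are made up for illustration):

    ```python
    # Hypothetical labeled test cases for a support-ticket classifier.
    TEST_CASES = [
        {"input": "I was charged twice this month.", "golden": "Billing"},
        {"input": "The app crashes when I upload a PDF.", "golden": "Technical issue"},
        {"input": "How do I reset my password?", "golden": "Account access"},
        {"input": "Do you sell gift cards?", "golden": "Other"},
    ]

    def run_eval(classify, test_cases):
        """Return the fraction of test cases a classifier labels correctly."""
        correct = sum(classify(tc["input"]) == tc["golden"] for tc in test_cases)
        return correct / len(test_cases)
    ```

    Here `classify` would wrap a call to Claude; a trivial stand-in (e.g. a function that always returns one label) is useful for smoke-testing the harness itself.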

    3. Run your eval

    Evaluation metrics

    Some success metrics to consider evaluating Claude’s performance on a classification task include:

    • Accuracy: The model's output exactly matches the golden answer or correctly classifies the input according to the task's requirements. This is typically calculated as (number of correct predictions) / (total number of predictions).
    • F1 Score: The model's output optimally balances precision and recall.
    • Consistency: The model's output is consistent with its predictions for similar inputs and follows a logical pattern.
    • Structure: The model's output follows the expected format or structure, making it easy to parse and interpret. For example, many classifiers are expected to output JSON.
    • Speed: The model responds within the acceptable time limit or latency threshold for the task.
    • Bias and Fairness: If classifying data about people, the model does not exhibit biases based on gender, ethnicity, or other characteristics that would lead to misclassification.
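    Accuracy and a macro-averaged F1 score can be computed in a few lines of plain Python (the predicted and golden labels below are made up; in practice you would use a library such as scikit-learn):

    ```python
    def accuracy(preds, golds):
        """Fraction of predictions that exactly match the golden labels."""
        return sum(p == g for p, g in zip(preds, golds)) / len(golds)

    def macro_f1(preds, golds):
        """Unweighted mean of per-class F1 scores."""
        f1s = []
        for label in set(golds) | set(preds):
            tp = sum(p == g == label for p, g in zip(preds, golds))
            fp = sum(p == label and g != label for p, g in zip(preds, golds))
            fn = sum(p != label and g == label for p, g in zip(preds, golds))
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f1s.append(2 * precision * recall / (precision + recall)
                       if precision + recall else 0.0)
        return sum(f1s) / len(f1s)

    preds = ["refund", "bug", "refund", "other"]
    golds = ["refund", "bug", "bug", "other"]
    print(accuracy(preds, golds))  # 0.75
    ```

    Macro averaging treats every class equally, which matters when classes are imbalanced; a micro average (pooling all predictions) would instead reduce to accuracy in the single-label case.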

    Deploy your classifier

    To see code examples of how to use Claude for classification, check out the Classification Guide in the Claude Cookbook.
