Use Testsets to create curated datasets for evaluating your AI agents
A Testset is a collection of Testcases used to evaluate the performance of an AI agent across various inputs and scenarios. Think of it as a curated dataset specifically designed for testing AI agents. Each Testset is organized around a central theme like “Core Functionality”, “Edge Cases”, or “Customer Support Queries”.

A Testcase is an individual test data point containing inputs, expected outputs, and metadata used for evaluation.
Testset details page showing eight Testcases for a "Message tone rewriter" AI agent.
Go to the Testsets page in your project and click the “New Testset” button to create a new, empty Testset.
Checking “Add example Testcases” will use AI to automatically generate three sample Testcases based on your Testset’s description. This provides a starting point for your Testset.
Each Testset has a schema, which defines which fields a Testcase has, the type of each field, and the role each field plays in evaluation.
Input fields are sent to your AI agent.
Expected fields are expected or ideal outputs, which metrics compare your agent’s output to.
Metadata fields are additional context for analysis, not used by evaluation or your agent.
You can update the schema of a Testset by clicking the “Edit Schema” button in the Testset actions menu. This allows you to add or remove fields, modify field types, and update field descriptions. Existing Testcases are not modified, but are validated against the new schema.
You can also create and update Testsets with the Scorecard SDK. You define a Testset’s schema using the JSON Schema format. For example, here’s a schema for a customer support system:
{ "type": "object", "title": "Customer Support Schema", "properties": { "userQuery": { "type": "string", "description": "The customer's question or request" }, "context": { "type": "string", "description": "Additional context about the customer" }, "ideal": { "type": "string", "description": "The ideal response from support" }, "expectedSentiment": { "type": "string", "description": "The expected predicted sentiment of the user query." }, "difficulty": { "type": "number", "description": "How difficult the customer support request is to solve (1-10)" } }, "required": ["userQuery", "ideal"]}
Supported Data Types
string: Text content
number: Numeric values (integers or floats)
boolean: True or false values
object: Nested JSON objects
array: Lists of JSON values
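The string and number types are covered by the schema above; boolean, object, and array fields are declared the same way. The excerpt below shows a properties block using the remaining types (the field names are made up for the example):

"escalationRequired": {
  "type": "boolean",
  "description": "Whether the request should be escalated to a human agent"
},
"conversationHistory": {
  "type": "array",
  "description": "Previous messages in the conversation",
  "items": { "type": "string" }
},
"accountDetails": {
  "type": "object",
  "description": "Structured information about the customer's account"
}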
You also need to define the field mapping when creating a Testset with the SDK. A field mapping categorizes schema fields by their role in evaluation. For example, consider a field mapping for the customer support schema above.
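Based on the roles described earlier, userQuery and context are inputs, ideal and expectedSentiment are expected outputs, and difficulty is metadata. The sketch below assumes the mapping uses keys named inputs, expected, and metadata; check the SDK reference for the exact property names your SDK version expects:

{
  "inputs": ["userQuery", "context"],
  "expected": ["ideal", "expectedSentiment"],
  "metadata": ["difficulty"]
}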
Input fields contain the actual data that gets sent to your AI system, workflow, or agent during testing. These should match exactly what your system expects to receive in production.
Quick tip: Your input fields should match what goes INTO your AI system. Think about:
What do users type into your UI?
What data does your API receive?
What would a user or another system send to trigger your workflow?
If you’re unsure, ask an engineer on your team: “What JSON/data do we send to our AI system?”
{ "user_message": "How do I reset my password?", "conversation_history": [...], "system_prompt": "You are a helpful assistant"}
For multi-step AI workflows or agents:
{ "task_description": "Analyze this document and extract key points", "input_data": "Document content or reference", "workflow_parameters": { "mode": "detailed", "output_format": "bullet_points" }}
For AI-powered API endpoints:
{ "query": "Find similar products", "filters": {"category": "electronics"}, "max_results": 10}
For document analysis systems:
{ "document_content": "Full text or path to document", "extraction_query": "What are the payment terms?", "document_type": "contract"}
Your input fields should mirror your production system’s interface. If your system expects a single prompt string, use a single string field. If it expects structured JSON with multiple parameters, reflect that structure in your schema.
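As a sketch, a system that accepts a single prompt string needs only one input field, while a structured API should be mirrored field for field. Both property blocks below are illustrative:

A single-prompt system:

{
  "prompt": { "type": "string", "description": "The full prompt sent to the model" }
}

A structured system:

{
  "query": { "type": "string", "description": "The user's search query" },
  "filters": { "type": "object", "description": "Structured filters applied to the search" },
  "max_results": { "type": "number", "description": "Maximum number of results to return" }
}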
The following examples show complete Testcase schemas with both input and expected fields. A field mapping then identifies which fields are inputs and which are expected outputs; a sketch for the Basic Chatbot schema follows the examples.
Basic Chatbot

{
  "type": "object",
  "properties": {
    "user_message": {
      "type": "string",
      "description": "What the user types in the chat box"
    },
    "expected_response": {
      "type": "string",
      "description": "What the bot should say back"
    }
  }
}

Document Q&A System

{
  "type": "object",
  "properties": {
    "question": {
      "type": "string",
      "description": "The question about the document"
    },
    "document": {
      "type": "string",
      "description": "The document text to search through"
    },
    "expected_answer": {
      "type": "string",
      "description": "The correct answer from the document"
    }
  }
}

AI Agent/Workflow

{
  "type": "object",
  "properties": {
    "task_description": {
      "type": "string",
      "description": "What you want the AI agent to do"
    },
    "input_data": {
      "type": "string",
      "description": "Any data the agent needs to complete the task"
    },
    "expected_output": {
      "type": "string",
      "description": "What the agent should produce"
    }
  }
}
The key is matching your test inputs to your actual system. If your system takes a single text field, use a single text field. If it takes multiple parameters, include those as separate fields.
The Scorecard UI supports importing Testcases in CSV, TSV, JSON, and JSONL formats. Scorecard automatically maps your file’s columns to the Testset’s schema fields and validates the data.
Upsert behavior: If your file includes Testcases with IDs that already exist in the Testset, those Testcases will be updated with the new values rather than duplicated. This makes it easy to bulk-update existing Testcases by re-uploading a modified file.
userQuery,context,ideal,category
"How do I cancel my order?","Order placed 1 hour ago","You can cancel orders within 2 hours...","cancellation"
"Where is my package?","Order shipped yesterday","Track your package using the link...","tracking"
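A JSON import carries the same data as a list of objects. Including an identifier for an existing Testcase triggers the upsert behavior described above; the id field name and value below are assumptions, so match the format of a Testset export from your project:

[
  {
    "id": "existing-testcase-id",
    "userQuery": "How do I cancel my order?",
    "context": "Order placed 1 hour ago",
    "ideal": "You can cancel orders within 2 hours...",
    "category": "cancellation"
  },
  {
    "userQuery": "Where is my package?",
    "context": "Order shipped yesterday",
    "ideal": "Track your package using the link...",
    "category": "tracking"
  }
]

In this sketch, the first entry updates the Testcase whose ID matches, while the second has no ID and is created as a new Testcase.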
Testset tags
You can add custom tags to your Testsets to categorize them, for example regression or edge-cases.

Duplicate Testset
You can create a copy of a Testset by clicking the “Duplicate” button in the Testset actions menu. The duplicate keeps the original field mapping, so it’s useful for creating variants of your Testsets without having to recreate the schema.
Development Testsets
Purpose: Iterative improvement and development
Size: 5-20 Testcases
Content: Your favorite prompts and edge cases that matter most
Usage: Quick feedback during development cycles
Regression Testsets
Purpose: Ensure new changes don’t break existing functionality
Size: 50-100 Testcases
Content: Representative examples of core use cases
Usage: Run regularly (nightly builds, CI/CD pipelines)
Launch Evaluation Testsets
Purpose: Comprehensive testing before major releases
Size: 100+ Testcases
Content: Broad coverage of all use cases and edge cases
Usage: Pre-launch validation and confidence building
Must-Pass Testsets
Purpose: Critical functionality that must never fail
Size: Variable (focus on precision over coverage)
Content: High-precision Testcases for essential features
Usage: Early checks in deployment pipelines
Remember that Testcase data may contain sensitive information. Follow your organization’s data handling policies and avoid including PII, secrets, or confidential data in Testsets.