Prompts in Scorecard are configurable templates that generate inputs for language models. They support dynamic variable substitution, multi-message conversations, and integrated testing workflows.
Screenshot of viewing prompt details in the UI.

Prompt management page for a "docs search" LLM system.

Create a Prompt

Go to the Prompts page in Scorecard and click the “New Prompt +” button. Provide a name and description for your prompt, then choose your AI model and configure parameters:
  • Model: Select from available models (e.g. gpt-4.1-mini, opus-4).
  • Temperature: Control randomness (0.0 = deterministic, 1.0 = creative).
  • Max Tokens: Set maximum response length.
  • Top-P: Fine-tune token selection probability.
Then, write your prompt content using the built-in editor with Jinja template support.
Screenshot of creating a prompt in the UI.

Prompt creation dialog with model settings.
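These settings correspond to the standard sampling parameters that most LLM providers expose. As a rough illustration of what each one controls, the sketch below calls the OpenAI Python SDK directly rather than Scorecard's API, and the values are arbitrary:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Roughly what the prompt configuration translates to at inference time.
response = client.chat.completions.create(
    model="gpt-4.1-mini",  # Model: which LLM generates the response
    temperature=0.2,       # Temperature: 0.0 = deterministic, 1.0 = creative
    max_tokens=512,        # Max Tokens: cap on response length
    top_p=0.9,             # Top-P: nucleus sampling probability cutoff
    messages=[
        {"role": "system", "content": "You are a helpful docs search assistant."},
        {"role": "user", "content": "How do I create a prompt?"},
    ],
)
print(response.choices[0].message.content)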

Prompt Templates & Jinja Syntax

Template Structure

Scorecard prompts support multi-message conversations with different roles:
  • System: Instructions for the AI model’s behavior
  • User: The main prompt or question
  • Assistant: Previous responses (for conversation context)
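For example, a multi-turn conversation template might be structured like the following sketch. The structure is illustrative only; Scorecard's stored schema may differ.

prompt_messages = [
    {"role": "system", "content": "You are a helpful customer service assistant."},
    {"role": "user", "content": "What is your return policy?"},
    {"role": "assistant", "content": "You can return most items within 30 days of delivery."},
    {"role": "user", "content": "{{customer_message}}"},  # new turn filled in from the Testcase
]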

Variable Substitution

Scorecard supports Jinja template syntax for variable substitution. This lets you insert dynamic content from your Testcases into your prompt by wrapping the name of a Testcase input field in {{ and }}. For example, if your Testcase has a customer_message field, your prompt might be:
You are a helpful customer service assistant.

Customer inquiry: {{customer_message}}
Product: {{product_name}}
Customer tier: {{customer_tier}}

Please provide a helpful response addressing their concern.
Scorecard also has a special {{allInputs}} variable that contains all the testcase input fields as a formatted string.
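You can preview how a Testcase will render into a template locally with the jinja2 package, which uses the same substitution syntax. The {{allInputs}} formatting shown at the end is an approximation; Scorecard's exact formatting may differ.

from jinja2 import Template

testcase_inputs = {
    "customer_message": "My order arrived damaged.",
    "product_name": "Wireless Headphones",
    "customer_tier": "Gold",
}

prompt = Template(
    "You are a helpful customer service assistant.\n\n"
    "Customer inquiry: {{customer_message}}\n"
    "Product: {{product_name}}\n"
    "Customer tier: {{customer_tier}}\n\n"
    "Please provide a helpful response addressing their concern."
)
print(prompt.render(**testcase_inputs))

# Approximating the special {{allInputs}} variable: every input field as one formatted string.
all_inputs = "\n".join(f"{key}: {value}" for key, value in testcase_inputs.items())
print(Template("Inputs:\n{{allInputs}}").render(allInputs=all_inputs))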

Version Management

The prompt details page shows the version history of the prompt. You can create a new version by modifying the prompt content or parameters and clicking “Save Changes”. Selecting “Publish this version to production” marks that version as the production version; otherwise, it is saved as an experimental version. You can mark any version as production later by clicking the “Publish” button, but exactly one version is the production version at any time.
Screenshot of the prompt dropdown in the UI.

Prompt version dropdown with publish button.

Before publishing to production, test a new version in the playground to be confident that it behaves as expected and won’t regress on any testcases.
Ready to evaluate your prompts? Follow the Playground guide to test your prompts on Testsets.