

Prompt management page for a "docs search" LLM system.
Create a Prompt
Go to the Prompts page in Scorecard and click the “New Prompt +” button. Provide a name and description for your prompt, then choose your AI model and configure its parameters:
- Model: Select from the available models (e.g. gpt-4.1-mini, opus-4). If you don’t see the LLM provider you want, add that provider’s API key in your organization settings.
- Temperature: Control randomness (0.0 = deterministic, 1.0 = creative).
- Max Tokens: Set maximum response length.
- Top-P: Fine-tune token selection probability.


Prompt creation dialog with model settings.
Prompt Templates & Jinja Syntax
Template Structure
Scorecard prompts support multi-message conversations with different roles, as illustrated by the example after this list:
- System: Instructions for the AI model’s behavior
- User: The main prompt or question
- Assistant: Previous responses (for conversation context)
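For instance, a template for the docs-search prompt above might pair a system message with a user message. This sketch is purely illustrative: the role and content fields mirror the promptTemplate structure described under SDK Integration below, and the {{ user_query }} placeholder is a hypothetical testcase field explained in the next section.

```python
# Illustrative message list for a docs-search prompt (field names follow the
# promptTemplate structure described in the SDK section; user_query is a
# hypothetical testcase input).
prompt_template = [
    {
        "role": "system",
        "content": "You are a documentation search assistant. Answer concisely and cite the relevant doc page.",
    },
    {
        "role": "user",
        "content": "Answer this question using the docs: {{ user_query }}",
    },
    # An assistant message can be included to supply prior conversation context.
]
```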
Variable substitution
Scorecard supports Jinja template syntax for variable substitution. This lets you insert dynamic content from your testcases into your prompt by wrapping the Testcase input’s field name in {{ and }}.
For example, if your Testcase has a customer_message field, your prompt might include {{customer_message}} wherever that value should appear. You can also use the {{allInputs}} variable, which contains all of the testcase input fields as a single formatted string.
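Scorecard performs this substitution for you at run time. As a rough local illustration (not the Scorecard implementation), the jinja2 Python library renders the same syntax:

```python
from jinja2 import Template

# Hypothetical testcase input; in Scorecard this comes from your Testset.
testcase_inputs = {"customer_message": "How do I reset my password?"}

template = Template("Respond helpfully to the customer: {{ customer_message }}")
print(template.render(**testcase_inputs))
# -> Respond helpfully to the customer: How do I reset my password?
```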
Advanced Jinja features
Scorecard supports full Jinja functionality, including the constructs below (a short rendering example follows the list):
- Conditional logic: {% if customer_tier == "premium" %} User is a premium customer. {% endif %}
- Loops: {% for item in product_list %} {{item.name}} {% endfor %}
- Filters: {{customer_message | upper}}
- Comments: {# This is a comment #}
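As a quick illustration of how these constructs behave, here is the same Jinja syntax rendered locally with the jinja2 library against some hypothetical testcase inputs:

```python
from jinja2 import Template

# Hypothetical testcase inputs used to exercise conditionals, loops, and filters.
inputs = {
    "customer_tier": "premium",
    "product_list": [{"name": "Widget"}, {"name": "Gadget"}],
    "customer_message": "where is my order?",
}

template = Template(
    "{% if customer_tier == 'premium' %}User is a premium customer.{% endif %} "
    "Products: {% for item in product_list %}{{ item.name }} {% endfor %}"
    "Message: {{ customer_message | upper }}"
)
print(template.render(**inputs))
# -> User is a premium customer. Products: Widget Gadget Message: WHERE IS MY ORDER?
```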
Version Management
The prompt details page shows the prompt’s version history. You can create a new version by modifying the prompt or its parameters and clicking “Save Changes”. Selecting “Publish this version to production” marks that version as the “production” version; otherwise it is saved as an experimental version. You can later mark any version as production by clicking the “Publish” button, though exactly one version is the production version at any time.

Prompt version dropdown with publish button.
SDK Integration
You can programmatically manage prompts and their versions using the Scorecard SDK. Since prompts are implemented as systems in Scorecard, you’ll use the systems API methods.
Creating or Updating Prompts
Use the systems.upsert() method to create a new prompt or update an existing one. For complete API details, see the Create (upsert) system API reference.
The config object contains your prompt’s model parameters and template (a sketch follows the field list):
- modelName: The LLM model to use (e.g., gpt-4-turbo, claude-3-opus)
- temperature: Controls randomness (0.0-1.0)
- maxTokens: Maximum response length
- topP: Token selection probability
- promptTemplate: Array of message objects defining the conversation structure. Each message has a role (system, user, or assistant) and content with your prompt text and Jinja variables.
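As a rough sketch, creating a prompt from the Python SDK might look like the following. The client import, constructor, and keyword names here are assumptions; check the Create (upsert) system API reference for the exact shape.

```python
# Sketch: create or update a prompt via systems.upsert().
# The client import/constructor and keyword names are assumptions; the config
# field names come from the list above.
from scorecard_ai import Scorecard  # assumed package/client name

client = Scorecard()  # assumes SCORECARD_API_KEY is set in the environment

system = client.systems.upsert(
    name="Docs Search Prompt",
    description="Answers questions against the product docs.",
    config={
        "modelName": "gpt-4-turbo",
        "temperature": 0.2,
        "maxTokens": 512,
        "topP": 1.0,
        # promptTemplate is required for the system to be treated as a prompt.
        "promptTemplate": [
            {"role": "system", "content": "You are a documentation search assistant."},
            {"role": "user", "content": "Answer this question using the docs: {{ user_query }}"},
        ],
    },
)
```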
For a system to be flagged as a prompt in Scorecard, it must include the promptTemplate configuration in its config object. Without this field, the system will not be recognized as a prompt.
Creating Prompt Versions
Use the systems.versions.upsert() method to create new versions of your prompt. For complete API details, see the Upsert system version API reference.
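A minimal sketch of creating a new version, reusing the client and system from the previous example; again, the exact parameter names are assumptions, so consult the Upsert system version API reference.

```python
# Sketch: add a new, more deterministic version of the prompt created above.
# Parameter names (system_id, config) are assumptions; see the API reference.
new_version = client.systems.versions.upsert(
    system_id=system.id,
    config={
        "modelName": "gpt-4-turbo",
        "temperature": 0.0,  # deterministic variant for evaluation runs
        "maxTokens": 512,
        "topP": 1.0,
        "promptTemplate": [
            {"role": "system", "content": "You are a documentation search assistant. Cite the relevant doc page."},
            {"role": "user", "content": "Answer this question using the docs: {{ user_query }}"},
        ],
    },
)
```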
Ready to evaluate your prompts? Follow the Playground guide to test your prompts on Testsets.