POST /projects/{projectId}/metrics
JavaScript
import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  apiKey: 'My API Key', // your Scorecard API key
});

// Create an AI-evaluated boolean metric in project 314.
const metric = await client.metrics.create('314', {
  evalType: 'ai',
  name: 'Response Accuracy',
  outputType: 'boolean',
  promptTemplate: 'Please evaluate if the following response is factually accurate: {{outputs.response}}',
  description: 'Evaluates if the response is factually accurate',
  evalModelName: 'gpt-4o',
  guidelines: 'Check if the response contains factually correct information',
  temperature: 0.1,
});

console.log(metric);
Example response:

{
  "id": "456",
  "name": "Response Accuracy",
  "description": "Evaluates if the response is factually accurate",
  "outputType": "boolean",
  "evalType": "ai",
  "guidelines": "Check if the response contains factually correct information",
  "promptTemplate": "Please evaluate if the following response is factually accurate: {{outputs.response}}",
  "evalModelName": "gpt-4o",
  "temperature": 0.1
}

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
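As a sketch of the raw request the SDK issues for this endpoint, the Authorization header and path can be assembled by hand. This is illustrative only; the base URL below is an assumption, not taken from this page.

```javascript
// Hypothetical helper: assembles the raw HTTP request for this endpoint.
// BASE_URL is an assumption; substitute your actual Scorecard API host.
const BASE_URL = 'https://api.example.com';

function buildCreateMetricRequest(projectId, token, body) {
  return {
    // Path parameter is URL-encoded into /projects/{projectId}/metrics.
    url: `${BASE_URL}/projects/${encodeURIComponent(projectId)}/metrics`,
    method: 'POST',
    headers: {
      // Bearer authentication header of the form "Bearer <token>".
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}

const req = buildCreateMetricRequest('314', 'My API Key', { name: 'Response Accuracy' });
console.log(req.url); // ends with /projects/314/metrics
```

The request object above could then be passed to fetch (url plus the remaining options); the SDK handles all of this internally.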

Path Parameters

projectId (string, required)

The ID of the Project to create the Metric in.

Example: "314"

Body

application/json
  • AI int metric
  • Human int metric
  • Heuristic int metric
  • AI float metric
  • Human float metric
  • Heuristic float metric
  • AI boolean metric
  • Human boolean metric
  • Heuristic boolean metric

The fields below document the first variant: a Metric with AI evaluation and integer output.

name (string, required)

The name of the Metric.

evalType (string, required)

AI-based evaluation type. Allowed value: "ai"

promptTemplate (string, required)

The complete prompt template for AI evaluation. Should include placeholders for dynamic content.

outputType (string, required)

Integer output type. Allowed value: "int"

description (string | null)

The description of the Metric.

guidelines (string)

Guidelines for AI evaluation on how to score the metric.

evalModelName (string, default: "gpt-4o")

The AI model to use for evaluation.

temperature (number, default: 0)

The temperature for AI evaluation. Required range: 0 <= x <= 2

passingThreshold (integer, default: 4)

The threshold for determining pass/fail from integer scores (1-5). Required range: 1 <= x <= 5
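The promptTemplate field uses double-brace placeholders such as {{outputs.response}}, as shown in the request example above. A minimal sketch of how such placeholders might be filled in, purely illustrative and not Scorecard's actual template engine:

```javascript
// Illustrative only: a tiny {{path.to.value}} substitution, not the
// renderer Scorecard actually uses server-side.
function renderTemplate(template, context) {
  return template.replace(/\{\{([\w.]+)\}\}/g, (match, path) => {
    // Walk the dotted path (e.g. "outputs.response") through the context.
    const value = path
      .split('.')
      .reduce((obj, key) => (obj == null ? undefined : obj[key]), context);
    // Leave unknown placeholders untouched.
    return value === undefined ? match : String(value);
  });
}

const prompt = renderTemplate(
  'Please evaluate if the following response is factually accurate: {{outputs.response}}',
  { outputs: { response: 'The Eiffel Tower is in Paris.' } }
);
console.log(prompt);
// → Please evaluate if the following response is factually accurate: The Eiffel Tower is in Paris.
```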

Response

Metric created successfully

  • AI int metric
  • Human int metric
  • Heuristic int metric
  • AI float metric
  • Human float metric
  • Heuristic float metric
  • AI boolean metric
  • Human boolean metric
  • Heuristic boolean metric

A Metric defines how to evaluate system outputs against expected results. The fields below document the AI int metric variant: AI evaluation with integer output.

id (string, required)

The ID of the Metric.

name (string, required)

The name of the Metric.

description (string | null, required)

The description of the Metric.

evalType (string, required)

AI-based evaluation type. Allowed value: "ai"

guidelines (string, required)

Guidelines for AI evaluation on how to score the metric.

promptTemplate (string, required)

The complete prompt template for AI evaluation. Should include placeholders for dynamic content.

evalModelName (string, required, default: "gpt-4o")

The AI model to use for evaluation.

temperature (number, required, default: 0)

The temperature for AI evaluation. Required range: 0 <= x <= 2

outputType (string, required)

Integer output type. Allowed value: "int"

passingThreshold (integer, required, default: 4)

The threshold for determining pass/fail from integer scores (1-5). Required range: 1 <= x <= 5
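Since passingThreshold maps integer scores (1-5) to pass/fail, the pass decision can be sketched as a simple comparison. The comparison direction is an assumption (a score at or above the threshold passes); the reference above does not state it explicitly.

```javascript
// Hedged sketch: how an int metric's score might map to pass/fail.
// Assumes a score at or above passingThreshold counts as passing;
// the docs do not state the comparison direction explicitly.
function isPassing(score, passingThreshold = 4) {
  if (!Number.isInteger(score) || score < 1 || score > 5) {
    throw new RangeError('score must be an integer in [1, 5]');
  }
  return score >= passingThreshold;
}

console.log(isPassing(4)); // true with the default threshold of 4
console.log(isPassing(3)); // false with the default threshold of 4
```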