PATCH /metrics/{metricId}
JavaScript
import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  apiKey: 'My API Key',
});

// Update Metric "321" to use AI evaluation with a boolean output.
const metric = await client.metrics.update('321', {
  evalType: 'ai',
  outputType: 'boolean',
  promptTemplate:
    'Using the following guidelines, evaluate the response: {{ guidelines }}\n\nResponse: {{ outputs.response }}\n\nIdeal answer: {{ expected.idealResponse }}',
});

console.log(metric);

Example response:
{
  "id": "321",
  "name": "Response Accuracy",
  "description": "Evaluates if the response is factually accurate",
  "outputType": "boolean",
  "evalType": "ai",
  "evalModelName": "gpt-4o",
  "guidelines": "Check if the response contains factually correct information",
  "promptTemplate": "Using the following guidelines, evaluate the response: {{ guidelines }}\n\nResponse: {{ outputs.response }}\n\nIdeal answer: {{ expected.idealResponse }}",
  "temperature": 0.1
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
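The SDK adds this header for you, but it can help to see the raw request it corresponds to. The sketch below builds the request options by hand; the base URL is a placeholder, not the real API host.

```javascript
// Sketch of the raw PATCH request behind client.metrics.update();
// BASE_URL is a placeholder assumption, not the documented API host.
const BASE_URL = 'https://api.example.com';
const metricId = '321';

function buildUpdateRequest(token, body) {
  return {
    url: `${BASE_URL}/metrics/${encodeURIComponent(metricId)}`,
    method: 'PATCH',
    headers: {
      // Bearer authentication header of the form `Bearer <token>`
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}

const req = buildUpdateRequest('My API Key', { evalType: 'ai', outputType: 'int' });
console.log(req.method, req.url);
```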

Path Parameters

metricId
string
required

The ID of the Metric to update.

Example:

"321"

Body

application/json
  • AI int metric
  • Human int metric
  • Heuristic int metric
  • AI float metric
  • Human float metric
  • Heuristic float metric
  • AI boolean metric
  • Human boolean metric
  • Heuristic boolean metric

A Metric with AI evaluation and integer output.

evalType
string
required

AI-based evaluation type.

Allowed value: "ai"
outputType
string
required

Integer output type.

Allowed value: "int"
name
string

The name of the Metric.

description
string | null

The description of the Metric.

guidelines
string

Guidelines for AI evaluation on how to score the metric.

promptTemplate
string

The complete prompt template for AI evaluation. Should include placeholders for dynamic content.
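The platform's actual templating engine is not documented here, but the placeholders in the example (`{{ guidelines }}`, `{{ outputs.response }}`, `{{ expected.idealResponse }}`) suggest mustache-style interpolation. A minimal illustrative sketch:

```javascript
// Minimal sketch of how {{ ... }} placeholders in a promptTemplate might
// be filled in. The real templating engine is not documented here, so
// treat this as an illustration only.
function renderTemplate(template, context) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path) => {
    // Resolve dotted paths like "outputs.response" against the context.
    const value = path.split('.').reduce((obj, key) => obj?.[key], context);
    return value ?? '';
  });
}

const rendered = renderTemplate(
  'Evaluate: {{ outputs.response }} vs {{ expected.idealResponse }}',
  {
    outputs: { response: 'Paris' },
    expected: { idealResponse: 'Paris, France' },
  },
);
console.log(rendered); // Evaluate: Paris vs Paris, France
```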

evalModelName
string
default: gpt-4o

The AI model to use for evaluation.

temperature
number
default: 0

The temperature for AI evaluation (0-2).

Required range: 0 <= x <= 2
passingThreshold
integer
default: 4

The threshold for determining pass/fail from integer scores (1-5).

Required range: 1 <= x <= 5
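The server enforces the ranges above, but a client-side check can fail fast before the PATCH is sent. This is a hypothetical helper based only on the documented constraints for the AI int metric variant:

```javascript
// Hypothetical pre-flight validation of the documented field constraints
// for an AI int metric update; the server enforces these too.
function validateAiIntMetricUpdate(body) {
  const errors = [];
  if (body.evalType !== 'ai') errors.push('evalType must be "ai"');
  if (body.outputType !== 'int') errors.push('outputType must be "int"');
  if (body.temperature !== undefined &&
      (body.temperature < 0 || body.temperature > 2)) {
    errors.push('temperature must satisfy 0 <= x <= 2');
  }
  if (body.passingThreshold !== undefined &&
      (!Number.isInteger(body.passingThreshold) ||
       body.passingThreshold < 1 || body.passingThreshold > 5)) {
    errors.push('passingThreshold must be an integer with 1 <= x <= 5');
  }
  return errors;
}

const errs = validateAiIntMetricUpdate({
  evalType: 'ai',
  outputType: 'int',
  temperature: 0.1,
  passingThreshold: 4,
});
console.log(errs); // []
```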

Response

Metric updated successfully

  • AI int metric
  • Human int metric
  • Heuristic int metric
  • AI float metric
  • Human float metric
  • Heuristic float metric
  • AI boolean metric
  • Human boolean metric
  • Heuristic boolean metric

A Metric defines how to evaluate system outputs against expected results. This variant uses AI evaluation with integer output.

id
string
required

The ID of the Metric.

name
string
required

The name of the Metric.

description
string | null
required

The description of the Metric.

evalType
string
required

AI-based evaluation type.

Allowed value: "ai"
guidelines
string
required

Guidelines for AI evaluation on how to score the metric.

promptTemplate
string
required

The complete prompt template for AI evaluation. Should include placeholders for dynamic content.

evalModelName
string
default: gpt-4o
required

The AI model to use for evaluation.

temperature
number
default: 0
required

The temperature for AI evaluation (0-2).

Required range: 0 <= x <= 2
outputType
string
required

Integer output type.

Allowed value: "int"
passingThreshold
integer
default: 4
required

The threshold for determining pass/fail from integer scores (1-5).

Required range: 1 <= x <= 5
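The required fields above can be checked at runtime before trusting a response object. A sketch of such a guard for the AI int metric variant (field list taken from this schema; the function name is an assumption, not SDK API):

```javascript
// Runtime check that an object carries every required field of the
// AI int metric response documented above. Hypothetical helper, not
// part of the scorecard-ai SDK.
const REQUIRED_FIELDS = [
  'id', 'name', 'description', 'evalType', 'guidelines',
  'promptTemplate', 'evalModelName', 'temperature',
  'outputType', 'passingThreshold',
];

function isAiIntMetric(obj) {
  return REQUIRED_FIELDS.every((field) => field in obj) &&
    obj.evalType === 'ai' &&
    obj.outputType === 'int';
}

const sample = {
  id: '321',
  name: 'Response Accuracy',
  description: null, // description is required but nullable
  evalType: 'ai',
  guidelines: 'Check factual accuracy',
  promptTemplate: 'Score: {{ outputs.response }}',
  evalModelName: 'gpt-4o',
  temperature: 0,
  outputType: 'int',
  passingThreshold: 4,
};
console.log(isAiIntMetric(sample)); // true
```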