SDK Migration Guide

Migrate your Scorecard code from the old SDK to the new SDK.

SDK v1

Find the old SDK docs on PyPI or npm.

SDK v2

Find the new SDK docs on PyPI or npm.

In SDK v1, we imported from scorecard (Python) or scorecard-ai (JavaScript).

In SDK v2, we import from scorecard_ai (Python) or scorecard-ai (JavaScript).

```python
# Setup Scorecard (SDK v1)
from scorecard.client import Scorecard

client = Scorecard(api_key="YOUR_API_KEY")
```

```python
# Setup Scorecard (SDK v2)
from scorecard_ai import Scorecard

client = Scorecard(api_key="YOUR_API_KEY")
```

In SDK v1, we supported creating a Run with a Scoring Config (a collection of Metrics).

We specified IDs as integers.

In SDK v2, Runs can only be created with a list of Metric IDs, not a Scoring Config ID.

The IDs are the same, but are now strings instead of integers.

Specifying the Project ID is now required.
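Because only the ID types changed, migrating call sites is mostly a matter of converting integers to strings. A minimal sketch (the helper name is ours, not part of the SDK):

```python
def to_v2_ids(*v1_ids):
    """Convert SDK v1 integer IDs to the string IDs SDK v2 expects."""
    return [str(i) for i in v1_ids]

# The numeric values are unchanged; only the type differs.
metric_ids = to_v2_ids(789, 790)  # ["789", "790"]
```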

```python
# Run and evaluate (SDK v1)
run = client.run_tests(
    input_testset_id=123,
    scoring_config_id=789,
    model_invocation=lambda prompt: call_model(prompt),
)
```

```python
# Run and evaluate (SDK v2)
from scorecard_ai.lib import run_and_evaluate

run = run_and_evaluate(
    client=client,
    project_id="123",
    testset_id="456",
    metric_ids=["789"],
    system=lambda system_input: call_model(system_input["prompt"]),
)
```
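The system callable replaces v1's model_invocation: instead of a bare prompt string, it receives a dict of the Testcase's input fields (keyed per your Testset schema) and returns the system's output. A hedged sketch with a stand-in model (call_model here is a placeholder for your own code, not an SDK function):

```python
def call_model(prompt: str) -> str:
    # Placeholder for your real model call (e.g. an LLM API request).
    return f"Echo: {prompt}"

def system(system_input: dict) -> str:
    # SDK v2 passes the Testcase's inputs as a dict matching the Testset schema.
    return call_model(system_input["prompt"])

system({"prompt": "What's 2+2 in English?"})
```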

In SDK v1, Testsets were not schema-based and required Testcases to follow a fixed format with userQuery and ideal fields.

In SDK v2, Testsets are much more flexible and applicable to use cases beyond chat bots. You define the schema of a Testset.

Note that the Project ID is now required.

```python
# Create a Testset (SDK v1)
testset = client.testset.create(
    name="Testset name",
    description="Optional Testset description",
    using_retrieval=False,
)
```

```python
# Create a schema-defined Testset (SDK v2)
testset = client.testsets.create(
    project_id="1234",
    name="Testset name",
    description="Required Testset description",
    field_mapping={
        # Inputs represent the input to the AI system.
        "inputs": ["userQuery"],
        # Labels represent the expected output of the AI system.
        "labels": ["ideal"],
        # Metadata fields are used for grouping Testcases, but not seen by the AI system.
        "metadata": [],
    },
    json_schema={
        "type": "object",
        "properties": {
            # The original user message.
            "userQuery": {"type": "string"},
            # The ideal model response.
            "ideal": {"type": "string"},
        },
        "required": ["userQuery", "ideal"],
    },
)
```
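Because Testcases must now match the Testset's JSON Schema, it can help to sanity-check records before uploading. A minimal stdlib-only sketch of the check the schema above implies (the helper is our own; the API also validates on its side):

```python
def matches_schema(record: dict) -> bool:
    """Check a Testcase dict against the schema above:
    both required fields must be present and hold strings."""
    required = ["userQuery", "ideal"]
    return all(isinstance(record.get(field), str) for field in required)

assert matches_schema({"userQuery": "What's 2+2 in English?", "ideal": "Four"})
assert not matches_schema({"userQuery": "Missing ideal"})
```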

In SDK v1, Testcases were created with a user_query and ideal.

In SDK v2, Testcases are now created with a jsonData field that matches the schema of the Testset.

```python
# Create a Testcase (SDK v1)
client.testcase.create(
    testset_id=testset.id,
    user_query="What's 2+2 in English?",
    ideal="Four",
)
```

```python
# Create Testcases (SDK v2)
client.testcases.create(
    testset_id=testset.id,
    items=[
        {
            "json_data": {
                "userQuery": "What's 2+2 in English?",
                "ideal": "Four",
            },
        },
    ],
)
```
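Existing v1-style Testcases can be migrated mechanically: each user_query/ideal pair becomes one json_data item. A hedged helper (the name is ours) for the schema used above:

```python
def to_v2_items(v1_testcases):
    """Map SDK v1 Testcase fields onto SDK v2 json_data items."""
    return [
        {"json_data": {"userQuery": tc["user_query"], "ideal": tc["ideal"]}}
        for tc in v1_testcases
    ]

items = to_v2_items([{"user_query": "What's 2+2 in English?", "ideal": "Four"}])
# items can then be passed to client.testcases.create(testset_id=..., items=items)
```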

For any operations not listed here, refer to the SDK v2 docs.