This quickstart shows how to use OpenLLMetry to automatically instrument and trace LLM calls for Scorecard monitoring. OpenLLMetry provides zero-code instrumentation for popular LLM libraries and structured tracing with workflows and tasks. You can also check out our complete Node.js OpenLLMetry example for a full working implementation.
Prefer a hands-on demo? Skip the manual setup and launch this quickstart in Google Colab: Run in Google Colab →

Steps

1. Set up accounts

Create a Scorecard account, then get your tracing credentials:
  1. Visit your Settings
  2. Copy your Scorecard API Key
  3. Set your environment variables:
import os

# Replace spaces with `%20` in the header value to comply with the
# OpenTelemetry Protocol Exporter specification

# For Scorecard tracing
os.environ['TRACELOOP_BASE_URL'] = "https://tracing.scorecard.io/otel"
os.environ['TRACELOOP_HEADERS'] = "Authorization=Bearer%20<YOUR_SCORECARD_KEY>"

# For OpenAI (if using)
os.environ['OPENAI_API_KEY'] = "<OPENAI_API_KEY>"
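If you'd rather not hand-encode the header, you can percent-encode it programmatically with Python's standard urllib.parse.quote, which turns the space in "Bearer <key>" into %20 (a minimal sketch; replace <YOUR_SCORECARD_KEY> with your actual key):
import os
from urllib.parse import quote

# quote() percent-encodes the space as %20, satisfying the
# OpenTelemetry Protocol Exporter header format
api_key = "<YOUR_SCORECARD_KEY>"
os.environ['TRACELOOP_HEADERS'] = "Authorization=" + quote(f"Bearer {api_key}")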

2. Install OpenLLMetry SDK

Install OpenLLMetry and your LLM library:
pip install traceloop-sdk openai
This example uses OpenAI. Scorecard works with all major providers (OpenAI, Anthropic/Claude, Google Gemini, Groq, AWS Bedrock, and more).
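For example, to trace an Anthropic-based app instead, you would install that provider's SDK alongside traceloop-sdk (package names as published on PyPI) and use the anthropic client in the steps below:
pip install traceloop-sdk anthropic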

3. Initialize OpenLLMetry

Set up OpenLLMetry to automatically trace your LLM calls:
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task
from traceloop.sdk.instruments import Instruments
from openai import OpenAI

# Initialize OpenAI client
openai_client = OpenAI()

# Initialize OpenLLMetry (reads the endpoint and headers from the environment
# variables set earlier); disable_batch=True sends each span immediately,
# which suits short-lived scripts
Traceloop.init(disable_batch=True, instruments={Instruments.OPENAI})
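The instruments argument controls which libraries OpenLLMetry hooks into. As a sketch, here is how the init call might look for an Anthropic-based app, with an optional app_name to group this service's traces (app_name is an optional Traceloop.init parameter; Instruments.ANTHROPIC is the enum value traceloop-sdk exposes for the Anthropic SDK):
# Sketch: initializing OpenLLMetry for an Anthropic app instead
Traceloop.init(
    app_name="quickstart_demo",           # illustrative name; groups this app's traces
    disable_batch=True,
    instruments={Instruments.ANTHROPIC},  # requires the `anthropic` package
)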

4. Create and run a simple traced workflow

Create a simple workflow that will be automatically traced. Here’s a minimal example:
@workflow(name="simple_chat")
def simple_workflow():
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke"}]
    )
    return completion.choices[0].message.content

# Run the workflow - all LLM calls will be automatically traced
simple_workflow()
print("Check Scorecard for traces!")

5. View traces in Scorecard

After running your application, view the traces in your Scorecard account:
  1. Visit app.scorecard.io
  2. Navigate to your project → Traces section
  3. Explore your traced workflows

What You’ll See

  • Workflow spans: High-level operations (simple_chat)
  • LLM spans: Automatic OpenAI API call instrumentation
  • Timing data: Duration of each operation
  • Token usage: Input/output tokens for LLM calls
  • Model information: Which models were used
  • Comprehensive data: All trace information visible in your Scorecard account

Viewing traces in the Scorecard UI.

Key Benefits

  • Zero-code instrumentation: LLM calls are automatically traced
  • Structured observability: Organize traces with workflows and tasks
  • Performance monitoring: Track latency, token usage, and costs
  • User feedback integration: Connect user satisfaction to specific traces
  • Production debugging: Understand exactly what happened in failed requests

Learn More