This quickstart shows how to instrument your LangChain application using OpenLLMetry and Scorecard for observability, debugging, and evaluation.
Looking for general tracing guidance? Check out the Tracing Quickstart for an overview of tracing concepts and alternative integration methods.

Steps

1. Install dependencies

Install the Traceloop SDK and the LangChain instrumentation package.
pip install traceloop-sdk opentelemetry-instrumentation-langchain
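The full example in step 4 also uses the langchain-openai and langchain-core packages; install them if you don't already have them, then confirm everything imports cleanly (the module path opentelemetry.instrumentation.langchain is how the instrumentation package is exposed):
pip install langchain-openai langchain-core
python -c "import traceloop.sdk, opentelemetry.instrumentation.langchain; print('ok')"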
2. Set up environment variables

Configure the Traceloop SDK to send traces to Scorecard. Get your Scorecard API key from Settings.
export TRACELOOP_API_KEY="<your_scorecard_api_key>"
export TRACELOOP_BASE_URL="https://tracing.scorecard.io/otel"
Replace <your_scorecard_api_key> with your actual Scorecard API key (starts with ak_).
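If you want your application to fail fast on a missing or malformed key, a minimal startup check is easy to add. This is a sketch; it relies only on the ak_ prefix noted above:
import os

# Hypothetical guard: verify the key is set and looks like a Scorecard key
api_key = os.environ.get("TRACELOOP_API_KEY", "")
if not api_key.startswith("ak_"):
    raise RuntimeError(
        "TRACELOOP_API_KEY is missing or does not look like a Scorecard API key (ak_...)"
    )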
3. Initialize tracing

Initialize the Traceloop SDK with LangChain instrumentation before importing LangChain modules.
Import order matters! You must initialize Traceloop before importing any LangChain modules to ensure all calls are properly instrumented.
from traceloop.sdk import Traceloop
from traceloop.sdk.instruments import Instruments

Traceloop.init(
    disable_batch=True,
    instruments={Instruments.LANGCHAIN}
)

# Now import your LangChain modules
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
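If you prefer configuring the SDK in code rather than via environment variables, Traceloop.init also accepts explicit arguments. A sketch, assuming api_key and api_endpoint mirror the TRACELOOP_API_KEY and TRACELOOP_BASE_URL variables from step 2:
from traceloop.sdk import Traceloop
from traceloop.sdk.instruments import Instruments

Traceloop.init(
    app_name="langchain-quickstart",  # optional label for your traces
    api_key="<your_scorecard_api_key>",
    api_endpoint="https://tracing.scorecard.io/otel",
    disable_batch=True,
    instruments={Instruments.LANGCHAIN},
)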
4. Run your LangChain application

With tracing initialized, run your LangChain application. All LLM calls, chain executions, and agent actions are automatically traced. Here’s a full example:
example.py
from traceloop.sdk import Traceloop
from traceloop.sdk.instruments import Instruments

Traceloop.init(
    disable_batch=True,
    instruments={Instruments.LANGCHAIN}
)

# Then import LangChain
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Create a simple chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model

# Run the chain - this will be traced
response = chain.invoke({"input": "What is the capital of France?"})
print(response.content)
You may see Failed to export batch warnings in the console. These are safe to ignore; your traces are still captured and delivered to Scorecard.
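Beyond automatic instrumentation, the Traceloop SDK lets you group related calls under a single named trace with its workflow decorator. A minimal sketch building on the chain above (the workflow name capital_qa is arbitrary):
from traceloop.sdk.decorators import workflow

@workflow(name="capital_qa")
def ask(question: str) -> str:
    # Invocations inside this function are grouped under one "capital_qa" trace
    return chain.invoke({"input": question}).content

print(ask("What is the capital of France?"))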
5. View traces in Scorecard

Navigate to the Records page in Scorecard to see your LangChain traces.
It may take 1-2 minutes for traces to appear on the Records page.
[Screenshot: Records page showing LangChain traces]
Click on any record to view the full trace details, including chain execution, LLM calls, and token usage.
[Screenshot: Trace details view]

What Gets Traced

OpenLLMetry automatically captures comprehensive telemetry from your LangChain applications:
LLM Calls: Every LLM invocation with full prompt and completion
Chains: Chain executions with inputs, outputs, and intermediate steps
Agents: Agent reasoning steps, tool selections, and action outputs
Retrievers: Document retrieval operations and retrieved content
Token Usage: Input, output, and total token counts per LLM call
Errors: Any failures with full error context and stack traces
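You can also cross-check token counts locally, since recent langchain-openai versions attach usage data to the response message. A sketch continuing from the example in step 4 (usage_metadata may be absent on older versions):
# response is the AIMessage returned by chain.invoke(...) above
usage = response.usage_metadata
if usage:
    print(f"input={usage['input_tokens']}, "
          f"output={usage['output_tokens']}, "
          f"total={usage['total_tokens']}")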

Next Steps