Boost Your LLM Observability with Traces and Spans

Imagine these scenarios: the performance of your LLM application suddenly drops, you lose track of which LLM calls you have made, or a potential security leak goes unnoticed. Working with LLMs can be challenging, and introducing LLMOps practices is crucial. That is why we offer tracing with Scorecard!

Trace Your LLM Application With Scorecard

Traces View in the Scorecard UI

LLM observability refers to the ability to gain insight into the behavior and performance of Large Language Models (LLMs) across their entire lifecycle. This LLMOps practice is essential for ensuring the reliability, performance, and security of LLM applications, especially given their complexity and the challenges associated with debugging and troubleshooting. To properly monitor an LLM application, it is important to trace its various components. But what is tracing?

Tracing records the path of a single invocation of an LLM system across its different components, documenting relevant events, errors, and activities along the way.

Traces are ultimately a hierarchical collection of records that contain information about the function calls made, variables used, and messages exchanged between different LLM components. Traces provide a detailed record of what happened during the entire execution of the LLM system (from user input to LLM output), aiding in debugging, troubleshooting, and auditing. For example, this helps to identify where potential delays occur and how services interact with each other.

A single trace consists of a series of time intervals known as spans. Spans are the building blocks of traces and represent a specific operation or sub-activity of a trace, such as sending the user input as a request to an LLM. Each span contains metadata such as timestamps, duration, and contextual information. In summary, a trace represents an overall record of the activity of an LLM system, while a span is a more focused view of a specific portion of that activity.
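To make the trace/span relationship concrete, here is a minimal sketch of such a hierarchy as plain Python dataclasses. The class and field names are illustrative assumptions for this example, not Scorecard's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    """One timed operation within a trace (names/fields are illustrative)."""
    name: str                      # e.g. "openai.chat_completion"
    start_ms: float                # when the operation began
    end_ms: float                  # when the operation finished
    attributes: dict = field(default_factory=dict)  # contextual metadata
    parent: Optional[str] = None   # name of the parent span, if any

    @property
    def duration_ms(self) -> float:
        return self.end_ms - self.start_ms

@dataclass
class Trace:
    """A hierarchical collection of spans for one invocation of the system."""
    trace_id: str
    spans: list = field(default_factory=list)

    @property
    def duration_ms(self) -> float:
        # The trace covers the earliest span start to the latest span end.
        return max(s.end_ms for s in self.spans) - min(s.start_ms for s in self.spans)

# One invocation: user input -> context retrieval -> LLM request
trace = Trace(trace_id="abc123", spans=[
    Span("handle_user_input", 0.0, 5.0),
    Span("retrieve_context", 5.0, 45.0, parent="handle_user_input"),
    Span("openai.chat_completion", 45.0, 845.0,
         attributes={"model": "gpt-4", "total_tokens": 512},
         parent="handle_user_input"),
])
print(trace.duration_ms)  # 845.0
```

The parent references are what make a trace hierarchical: each span points to the operation that triggered it, so the full call path can be reconstructed.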

Inspect an LLM Trace and Its Individual Spans

Inspect the metadata of a trace, such as its duration and overall token count. This can help in quickly identifying anomalies such as very long durations or unusually low or high token counts, which could indicate that something went wrong. Additionally, get an overview of each individual span that belongs to a trace and dive into further details!
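As a rough illustration of this kind of anomaly check, the sketch below flags traces whose duration or token count falls outside expected bounds. The field names and thresholds are assumptions for the example, not Scorecard's actual export schema:

```python
# Hypothetical trace metadata as it might appear in an export.
traces = [
    {"trace_id": "t1", "duration_ms": 820, "total_tokens": 512},
    {"trace_id": "t2", "duration_ms": 9500, "total_tokens": 480},  # very slow
    {"trace_id": "t3", "duration_ms": 790, "total_tokens": 3},     # suspiciously few tokens
]

def flag_anomalies(traces, max_duration_ms=5000, min_tokens=10, max_tokens=4000):
    """Return (trace_id, reason) pairs for traces outside expected bounds."""
    flagged = []
    for t in traces:
        if t["duration_ms"] > max_duration_ms:
            flagged.append((t["trace_id"], "duration too long"))
        elif not (min_tokens <= t["total_tokens"] <= max_tokens):
            flagged.append((t["trace_id"], "token count out of range"))
    return flagged

print(flag_anomalies(traces))
# [('t2', 'duration too long'), ('t3', 'token count out of range')]
```

In practice the bounds would come from your application's historical baselines rather than fixed constants.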

Trace Details with Individual Spans

Access Span Details on a Granular Level

Besides metadata, spans can provide details about requests sent to the LLM, such as a request to the ChatCompletion endpoint of OpenAI. Here, a span shows the individual prompt messages that were sent as input, the LLM's response as output, and further details that can help in debugging and inspecting your LLM application.
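A span for such a request might carry a payload along these lines. The keys below are illustrative assumptions for the sketch, not Scorecard's exact span schema:

```python
# Illustrative span payload for an OpenAI ChatCompletion call.
span = {
    "name": "openai.chat_completion",
    "input": {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize our latest release notes."},
        ],
    },
    "output": {"role": "assistant", "content": "Here is a summary: ..."},
    "metadata": {"duration_ms": 800, "prompt_tokens": 42, "completion_tokens": 128},
}

def total_tokens(span):
    """Sum the prompt and completion tokens recorded on a span."""
    m = span["metadata"]
    return m["prompt_tokens"] + m["completion_tokens"]

print(total_tokens(span))  # 170
```

Because the full input messages and the model's output are captured together, a single span is often enough to reproduce and debug a problematic LLM call.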

Span Details in a Trace

Check Out the Tracing Demo

Check out the cookbooks for end-to-end Scorecard implementations and the demo project that shows how to use tracing with Scorecard!