Introduction
Composo’s tracing SDK enables you to capture and evaluate LLM calls from your agent applications in real time. It currently supports DIY agents built on OpenAI, with support for Anthropic, LangChain/LangGraph, and other SDKs to come.

Why Tracing Matters
Many agent frameworks abstract away the underlying LLM calls, making it difficult to understand what’s happening under the hood and to evaluate performance effectively. Many evaluation platforms only let you send traces to a remote system and wait to view results later. Composo gives you the best of both worlds: trace and evaluate immediately, or view your traces in our platform or in your own observability tooling, spreadsheets, or CI/CD, seamlessly. By instrumenting your LLM calls and marking agent boundaries, you can evaluate performance in real time and act right away, adjusting behavior before it reaches your users.

Key Features
- Mark Agent Boundaries: Use the `AgentTracer` context manager or `@agent_tracer` decorator to define which LLM calls belong to which agent
- Hierarchical Tracing: Support for nested agents to model complex multi-agent architectures
- Independent Evaluation: Each agent’s performance is evaluated separately with average, min, max and standard-deviation statistics reported per agent
- Flexible Evaluation: Get evaluation results instantly in your code, or view traces in the Composo platform for deeper analysis (or through seamless sync with any observability platform like Grafana, Sentry, Langfuse, LangSmith, Braintrust)
Framework Support
- Currently Supported: Agents built on OpenAI LLMs
- Coming Soon: Anthropic, LangChain, OpenAI Agents, and other popular frameworks
Quickstart
This guide walks you through adding tracing to your agent application in 3 steps. We’ll start with a simple multi-agent application and add tracing incrementally.

Starting Code
Here’s a simple multi-agent application we want to trace:

Step 1: Install and Initialize
Install the Composo SDK and initialize tracing for OpenAI.

Step 2: Mark Your Agent Boundaries
Wrap your agent logic with `AgentTracer` or `@agent_tracer` to mark boundaries.
For the function-based agent, add the decorator:
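A sketch of the decorator form, assuming `agent_tracer` is importable from the top-level `composo` package and takes the agent name as a keyword argument (the agent name and model below are illustrative):

```python
from composo import agent_tracer  # import path assumed
from openai import OpenAI

client = OpenAI()

@agent_tracer(name="research_agent")
def research_agent(question: str) -> str:
    # Every LLM call inside this function is attributed to "research_agent"
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```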
For the root of your workflow, use the `AgentTracer` context manager:
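A sketch of the context-manager form, under the same import assumption and reusing the decorated function from above; keep a reference to `tracer`, since its trace is needed in Step 3:

```python
from composo import AgentTracer  # import path assumed

# LLM calls made inside this block are attributed to "orchestrator"
with AgentTracer(name="orchestrator") as tracer:
    answer = research_agent("What is retrieval-augmented generation?")
```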
Note: the `tracer` object from the root `AgentTracer` is needed for evaluation in Step 3.
Step 3: Evaluate Your Trace
Add evaluation after your agents complete:

Complete Example
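Under the same assumptions (top-level `composo` imports; illustrative agent names and model), the three steps might fit together like this:

```python
# pip install composo openai  (package names assumed)
from openai import OpenAI

from composo import (  # import paths assumed
    AgentTracer,
    ComposoTracer,
    Instruments,
    agent_tracer,
    criteria,
    evaluate_trace,
)

# Step 1: instrument the OpenAI client
ComposoTracer.init(instruments=[Instruments.OPENAI])

client = OpenAI()


# Step 2: mark agent boundaries
@agent_tracer(name="researcher")
def research(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


@agent_tracer(name="writer")
def write_summary(notes: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Summarize: {notes}"}],
    )
    return response.choices[0].message.content


with AgentTracer(name="pipeline") as tracer:
    notes = research("What is retrieval-augmented generation?")
    summary = write_summary(notes)

# Step 3: evaluate the captured trace; each agent is scored independently
results = evaluate_trace(trace=tracer.trace, criteria=criteria.agent)
print(results)
```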
API Reference
ComposoTracer.init
Initializes the Composo tracing system and instruments the specified LLM libraries to automatically capture their API calls.

Parameters
- instruments (`list[Instruments]`): List of LLM libraries to instrument for tracing. Currently supported: `Instruments.OPENAI`, which instruments the OpenAI client to trace all chat completion calls
Usage
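A minimal sketch, assuming `ComposoTracer` and `Instruments` are importable from the top-level `composo` package:

```python
# pip install composo  (package name assumed)
from composo import ComposoTracer, Instruments  # import paths assumed

# Instrument the OpenAI client so every chat completion call is traced
ComposoTracer.init(instruments=[Instruments.OPENAI])
```

Call this once at startup, before your agents make any LLM calls.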
@agent_tracer
Decorator that marks all LLM calls within a function as belonging to a specific agent. Use this for function-based agent implementations.

Parameters
- name (`str`): The name of the agent for tracing purposes
Usage
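A sketch, assuming `agent_tracer` is importable from the top-level `composo` package (the agent name and model are illustrative):

```python
from composo import agent_tracer  # import path assumed
from openai import OpenAI

client = OpenAI()

@agent_tracer(name="planner")
def plan(task: str) -> str:
    # LLM calls made here are attributed to the "planner" agent
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": f"Plan the steps for: {task}"}],
    )
    return response.choices[0].message.content
```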
AgentTracer
Context manager that marks all LLM calls within its scope as belonging to a specific agent. Returns a tracer object that can be used for evaluation.

Parameters
- name (`str`): The name of the agent for tracing purposes
Returns
- tracer: Tracer object whose `trace` attribute contains the captured LLM calls
Usage
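A sketch, assuming `AgentTracer` is importable from the top-level `composo` package; `answer_ticket` is a hypothetical helper that makes LLM calls:

```python
from composo import AgentTracer  # import path assumed

with AgentTracer(name="support_agent") as tracer:
    reply = answer_ticket(ticket_text)  # hypothetical LLM-calling helper

# tracer.trace now holds the captured LLM calls, ready for evaluate_trace
```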
Nested Agents
`AgentTracer` supports nesting to model hierarchical agent architectures.

evaluate_trace
Evaluates captured LLM traces against specified criteria. Composo evaluates each agent independently and reports statistics (average, min, max, standard deviation) for scores within each agent.

Parameters
- trace: Trace object returned by the `AgentTracer` context manager (accessed via `tracer.trace`)
- criteria: Evaluation criteria to apply to the trace (e.g., `criteria.agent`)
Usage
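A sketch, assuming `evaluate_trace` and `criteria` are importable from the top-level `composo` package, and `tracer` is the object returned by a root `AgentTracer`:

```python
from composo import criteria, evaluate_trace  # import paths assumed

# `tracer` comes from a root AgentTracer block that has already exited
results = evaluate_trace(trace=tracer.trace, criteria=criteria.agent)
```

The shape of `results` (per-agent scores and statistics) is not specified here; inspect the returned object or see the platform docs for details.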
Next Steps
- Read our Agent Evaluation Blog - Deep dive into evaluation strategies
- Explore the Criteria Library - Find more pre-built criteria