# Composo

## Docs

- [Attach Comment Endpoint](https://docs.composo.ai/api-reference/annotations/attach-comment-endpoint.md)
- [Attach Rating Endpoint](https://docs.composo.ai/api-reference/annotations/attach-rating-endpoint.md)
- [Reward](https://docs.composo.ai/api-reference/evals/reward.md): Evaluate LLM output against specified criteria. Score on a continuous 0-1 scale.
- [Get My Rate Limits](https://docs.composo.ai/api-reference/rate-limits/get-my-rate-limits.md): Get rate limits for the authenticated user's domain
- [List Traces Endpoint](https://docs.composo.ai/api-reference/traces/list-traces-endpoint.md)
- [FAQs](https://docs.composo.ai/documentation/FAQs/common-questions.md)
- [About](https://docs.composo.ai/documentation/community-examples/about.md)
- [Multi-turn Evaluation](https://docs.composo.ai/documentation/community-examples/multi-turn.md): Strategies for testing multi-turn agent conversations where agent responses are non-deterministic
- [Agent Evaluation](https://docs.composo.ai/documentation/cookbooks/agent-evaluation.md): Evaluate the performance of your agentic systems with Composo's comprehensive agent framework
- [Anonymization](https://docs.composo.ai/documentation/cookbooks/anonymization.md): Anonymizing your data while maintaining evaluation quality
- [RAG Evaluation](https://docs.composo.ai/documentation/cookbooks/rag-evaluation.md): Battle-tested metrics for retrieval-augmented generation, including faithfulness, completeness, and precision
- [Response Quality Evaluation](https://docs.composo.ai/documentation/cookbooks/response-evaluation.md): Evaluate custom quality aspects of LLM responses
- [Models](https://docs.composo.ai/documentation/getting-started/models.md)
- [Quickstart](https://docs.composo.ai/documentation/getting-started/quickstart.md): Ship AI agents that actually work in production
- [Criteria Library](https://docs.composo.ai/documentation/guides/criteria-library.md): A range of criteria we've found helpful when writing your own
- [How to write effective criteria](https://docs.composo.ai/documentation/guides/criteria-writing.md)
- [Ground Truth Evaluation](https://docs.composo.ai/documentation/guides/ground-truths.md): Leverage your labeled data to create precise evaluation metrics
- [Langfuse](https://docs.composo.ai/documentation/monitoring/langfuse.md): How to use Composo in combination with Langfuse
- [Composo x Metabase](https://docs.composo.ai/documentation/monitoring/metabase.md): Explore and visualize Composo evaluations
- [Tags](https://docs.composo.ai/documentation/monitoring/tags.md): Tag and categorize your evaluations for better organization and filtering
- [Agent Tracing](https://docs.composo.ai/documentation/monitoring/tracing.md): Trace the LLM calls made by your agent framework
- [Unit Testing](https://docs.composo.ai/documentation/testing/unit-testing.md): Integrate Composo evaluations into your unit testing workflow
- [AsyncComposo](https://docs.composo.ai/python-sdk-reference/async-composo-client.md): Asynchronous client for high-performance batch evaluations
- [Composo](https://docs.composo.ai/python-sdk-reference/composo-client.md): Synchronous client for evaluating LLM conversations
- [Tracing](https://docs.composo.ai/python-sdk-reference/tracing.md): Track LLM interactions and multi-agent conversations

## OpenAPI Specs

- [openapi](https://platform.composo.ai/api/evals-docs/openapi.json)
- [openapi-spec](https://docs.composo.ai/openapi-spec.json)