OpenLLMetry is an open source project that allows you to easily start monitoring and debugging the execution of your LLM app. Tracing is done in a non-intrusive way, built on top of OpenTelemetry. You can choose to export the traces to Traceloop, or to your existing observability stack.
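To route traces to an existing observability stack rather than Traceloop, you can point the SDK at your own OpenTelemetry collector via an environment variable. A minimal sketch, assuming a collector listening on the standard OTLP/HTTP port (the endpoint URL here is an example, not a required value):

```shell
# Send traces to your own OpenTelemetry collector instead of Traceloop
# (replace the URL with your collector's OTLP endpoint)
export TRACELOOP_BASE_URL="http://localhost:4318"
```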

You can use OpenLLMetry whether you use a framework like LangChain, or directly interact with a foundation model API.

import openai
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

# Initialize the SDK once at application startup
Traceloop.init(app_name="joke_generation_service")


# Decorating the function groups its LLM calls into a single trace
@workflow(name="joke_creation")
def create_joke():
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
    )

    return completion.choices[0].message.content

Getting Started

Select from the following guides to learn more about how to use OpenLLMetry:
