
Introducing OpenLLMetry — Extending OpenTelemetry to LLMs

Last month, we shared a set of open-source extensions that we built on top of OpenTelemetry. These extensions provide visibility into LLM and AI applications — prompts, vector DBs, and more. And today we’re excited to announce the set’s general availability in Python.

We know that there’s already a decent number of tools for LLM observability, some open-source and some not. But the problem we found with all of them is that they’re closed-protocol by design, forcing you to use their SDK, their platform, and their proprietary framework for running your LLMs.

The cloud observability domain evolved in a similar manner: it started with companies like Datadog and New Relic offering closed-protocol agents that monitor your system. You’d install the agent in your cloud environment, and that agent would then send observability data back to the vendor’s platform. These agents essentially vendor-locked you into their platforms.

What Is OpenTelemetry?

Then came OpenTelemetry. It’s an open protocol for sending logs, metrics, and traces from production systems. For organizations, the big advantage of using an open protocol is that it dramatically decreases the cost of adopting new observability technologies. Before OpenTelemetry, if you wanted to start using Datadog (for example), you’d have to install its proprietary agent across all your services. Now, if you already have OpenTelemetry instrumented, adding Datadog is just a small configuration change away.

All major observability platforms now support OpenTelemetry alongside their proprietary agents. The OpenTelemetry project also provides a set of instrumentations that automatically give you visibility into different components of your system — such as instrumentations for PostgreSQL queries, HTTP calls, and Kafka. And it has been gaining popularity in the past couple of years.

[Chart: npm download trends for OpenTelemetry packages. Source: NPM Trends]

OpenTelemetry for LLM Observability Tools

Now let’s go back to our gen-AI space. It's still early, so now is the perfect time to define an open protocol for observability instead of relying on proprietary ones.

So we’ve built OpenLLMetry, which extends OpenTelemetry with AI-related instrumentations. We’ve already built instrumentations for LLMs like OpenAI, Anthropic, and Cohere; vector databases like Pinecone; and LLM frameworks like LangChain and Haystack. These instrumentations let you trace prompts and LLM responses, model performance, token usage, and more.

OpenLLMetry is based on OpenTelemetry and uses the same protocol, so you can plug it into Datadog, New Relic, Sentry, Honeycomb, and other services.

We’ve also built an SDK that makes it easy to use all of these instrumentations, in case you’re not too familiar with OpenTelemetry.

Everything is written in Python, with TypeScript around the corner, and licensed under Apache 2.0 (and it will stay that way!).

Check it out, and start using it today to get instant observability for your LLM application, with your existing tools or with new tools — and most importantly, with the flexibility to choose and switch between them whenever you wish.