GET https://api.traceloop.com/v2/warehouse/spans
Get Spans
curl --request GET \
  --url https://api.traceloop.com/v2/warehouse/spans \
  --header 'Authorization: Bearer <token>'
{
  "spans": {
    "data": [
      {
        "environment": "<string>",
        "timestamp": 123,
        "trace_id": "<string>",
        "span_id": "<string>",
        "parent_span_id": "<string>",
        "trace_state": "<string>",
        "span_name": "<string>",
        "span_kind": "<string>",
        "service_name": "<string>",
        "resource_attributes": {},
        "scope_name": "<string>",
        "scope_version": "<string>",
        "span_attributes": {},
        "duration": 123,
        "status_code": "<string>",
        "status_message": "<string>",
        "prompts": {},
        "completions": {},
        "input": "<string>",
        "output": "<string>"
      }
    ],
    "page_size": 123,
    "total_results": 123,
    "next_cursor": "<string>"
  }
}
Retrieve spans from the data warehouse with flexible filtering and pagination options. This endpoint returns spans from the environment associated with your API key. You can filter by time ranges, workflows, attributes, and more.

Request Parameters

from_timestamp_sec
int64
required
Start of the time range, as a Unix timestamp in seconds.
to_timestamp_sec
int64
End of the time range, as a Unix timestamp in seconds.
workflow
string
Filter spans by workflow name.
span_name
string
Filter spans by span name.
attributes
map[string]string
Simple key-value attribute filtering. Any query parameter that does not match a known field is treated as an attribute filter. Example: ?llm.vendor=openai&llm.request.model=gpt-4
sort_order
string
Sort order for results. Accepted values: ASC or DESC. Defaults to ASC.
sort_by
string
Field to sort by. Supported values:
  • timestamp - Span creation time
  • duration_ms - Span duration in milliseconds
  • span_name - Name of the span
  • trace_id - Trace identifier
  • total_tokens - Total token count
  • traceloop_workflow_name - Workflow name
  • traceloop_entity_name - Entity name
  • llm_usage_total_tokens - LLM token usage
  • llm_response_model - LLM model used
cursor
string
Pagination cursor for fetching the next set of results. Use the next_cursor value from the previous response.
limit
int
Maximum number of spans to return per page.
filters
FilterCondition[]
Array of filter conditions to apply to the query. Each filter must have id, operator, and value fields, and the array is passed as URL-encoded JSON (see the Python sketch after the filter example below). Filter structure:
[{"id": "field_name", "operator": "equals", "value": "value"}]
Supported operators:
Operator                  Description
equals                    Exact match
not_equals                Not equal to value
greater_than              Greater than (numeric)
greater_than_or_equal     Greater than or equal (numeric)
less_than                 Less than (numeric)
less_than_or_equal        Less than or equal (numeric)
contains                  String contains value
starts_with               String starts with value
in                        Value in list (use with array)
not_in                    Value not in list (use with array)
exists                    Field exists (no value needed)
not_exists                Field does not exist (no value needed)
Example - Filter by LLM vendor:
?filters=[{"id":"llm.vendor","operator":"equals","value":"openai"}]
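
A minimal sketch of issuing this request from Python, assuming the requests library; the endpoint, parameter names, and filter structure come from this page, while the TRACELOOP_API_KEY environment variable and the specific filter values are only illustrative:

import json
import os

import requests

BASE_URL = "https://api.traceloop.com/v2/warehouse/spans"
API_KEY = os.environ["TRACELOOP_API_KEY"]  # illustrative; use however you store your API key

# Filter for spans from the OpenAI vendor; filters is a JSON array passed as a
# single query parameter, and requests URL-encodes it automatically.
filters = [{"id": "llm.vendor", "operator": "equals", "value": "openai"}]

params = {
    "from_timestamp_sec": 1702900800,  # required: start of the time range (Unix seconds)
    "limit": 50,                       # page size
    "sort_by": "timestamp",
    "sort_order": "DESC",
    "filters": json.dumps(filters),
    # Any query parameter that is not a known field is treated as an attribute filter:
    "llm.request.model": "gpt-4",
}

response = requests.get(
    BASE_URL,
    params=params,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
spans_page = response.json()["spans"]
print(spans_page["page_size"], spans_page["next_cursor"])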

Response

Returns a paginated response containing span objects:
spans
object

Span Object

environment
string
The environment where the span was captured.
timestamp
int64
The timestamp when the span was created (Unix milliseconds).
trace_id
string
The unique trace identifier.
span_id
string
The unique span identifier.
parent_span_id
string
The parent span identifier.
trace_state
string
The trace state information.
span_name
string
The name of the span.
span_kind
string
The kind of span (e.g., SPAN_KIND_CLIENT, SPAN_KIND_INTERNAL).
service_name
string
The name of the service that generated the span.
resource_attributes
map
Key-value pairs of resource attributes.
scope_name
string
The instrumentation scope name.
scope_version
string
The instrumentation scope version.
span_attributes
map
Key-value pairs of span attributes (e.g., llm.vendor, llm.request.model).
duration
int64
The duration of the span in milliseconds.
status_code
string
The status code of the span (e.g., STATUS_CODE_UNSET, STATUS_CODE_ERROR).
status_message
string
The status message providing additional context.
prompts
map
Prompt data associated with the span (for LLM calls), keyed as llm.prompts.<index>.<field>.
completions
map
Completion data associated with the span (for LLM calls), keyed as llm.completions.<index>.<field>.
input
string
Input data for the span.
output
string
Output data for the span.

Example Response

{
  "spans": {
    "data": [
      {
        "environment": "production",
        "timestamp": 1734451200000,
        "trace_id": "a1b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7",
        "span_id": "1a2b3c4d5e6f7a8b",
        "parent_span_id": "9f8e7d6c5b4a3210",
        "trace_state": "",
        "span_name": "openai.chat",
        "span_kind": "SPAN_KIND_CLIENT",
        "service_name": "my-llm-app",
        "resource_attributes": {
          "service.name": "my-llm-app",
          "telemetry.sdk.language": "python",
          "telemetry.sdk.name": "opentelemetry",
          "telemetry.sdk.version": "1.38.0"
        },
        "scope_name": "opentelemetry.instrumentation.openai.v1",
        "scope_version": "0.47.5",
        "span_attributes": {
          "llm.vendor": "openai",
          "llm.request.model": "gpt-4",
          "llm.response.model": "gpt-4-0125-preview",
          "llm.usage.input_tokens": "150",
          "llm.usage.output_tokens": "85",
          "llm.usage.total_tokens": "235",
          "traceloop.workflow.name": "customer_support"
        },
        "duration": 1850,
        "status_code": "STATUS_CODE_UNSET",
        "status_message": "",
        "prompts": {
          "llm.prompts.0.role": "system",
          "llm.prompts.0.content": "You are a helpful assistant.",
          "llm.prompts.1.role": "user",
          "llm.prompts.1.content": "What is the weather like today?"
        },
        "completions": {
          "llm.completions.0.role": "assistant",
          "llm.completions.0.content": "I don't have access to real-time weather data...",
          "llm.completions.0.finish_reason": "stop"
        },
        "input": "",
        "output": ""
      }
    ],
    "page_size": 50,
    "total_results": 1250,
    "next_cursor": "1734451200000"
  }
}
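
The prompts and completions maps in the example above use flattened keys (llm.prompts.<index>.<field> and llm.completions.<index>.<field>). Below is a small Python sketch, assuming span dictionaries shaped like the example, that regroups those keys into ordered message lists; the helper name collect_messages is made up for illustration:

def collect_messages(flat, prefix):
    """Regroup flattened keys like 'llm.prompts.0.role' into an ordered list of message dicts."""
    messages = {}
    for key, value in flat.items():
        if not key.startswith(prefix + "."):
            continue
        _, index, field = key.rsplit(".", 2)  # e.g. 'llm.prompts', '0', 'content'
        messages.setdefault(int(index), {})[field] = value
    return [messages[i] for i in sorted(messages)]

span = spans_page["data"][0]  # a span dict from a /v2/warehouse/spans response page
prompts = collect_messages(span["prompts"], "llm.prompts")
completions = collect_messages(span["completions"], "llm.completions")
total_tokens = int(span["span_attributes"].get("llm.usage.total_tokens", 0))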

Pagination

To paginate through results:
  1. Make an initial request without a cursor
  2. Use the next_cursor value from the response in subsequent requests
  3. Continue until next_cursor is empty or you have retrieved all the data you need (a Python loop version follows the curl examples below)
# Example Filter: [{"id":"llm.vendor","operator":"equals","value":"openai"}]
#
# First request with filter (URL-encoded)
curl "https://api.traceloop.com/v2/warehouse/spans?from_timestamp_sec=1702900800&limit=50&filters=%5B%7B%22id%22%3A%22llm.vendor%22%2C%22operator%22%3A%22equals%22%2C%22value%22%3A%22openai%22%7D%5D" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Next page (using next_cursor from previous response)
curl "https://api.traceloop.com/v2/warehouse/spans?from_timestamp_sec=1702900800&limit=50&cursor=1734451200000&filters=%5B%7B%22id%22%3A%22llm.vendor%22%2C%22operator%22%3A%22equals%22%2C%22value%22%3A%22openai%22%7D%5D" \
  -H "Authorization: Bearer YOUR_API_KEY"