# LLM Telemetry Use Cases in Cribl
With large language model (LLM) application telemetry flowing into Cribl Search and Cribl Stream, you can explore and visualize LLM telemetry, and control where the data goes and how it is shaped.
The following topics describe vendor-agnostic patterns that you can map to the field names and span types your instrumentation exposes. For background on common semantic conventions, see resources such as OpenInference.
Explore the following use cases:
- Explore LLM telemetry in Cribl Search
- Route LLM telemetry to multiple Destinations
- Mask sensitive LLM prompts and completions
- Emit LLM cost and usage metrics from token counts
- Sample or throttle high-volume LLM telemetry
## Typical Field Names
Exact field names vary by instrumentation. The examples in these guides use generic names like `total_tokens`, `model`, and `estimated_cost_usd` for clarity; substitute the fields available in your telemetry.
| Concept | Example field names |
|---|---|
| Model name | `llm.request.model`, `model`, `request.model` |
| Prompt tokens | `llm.usage.prompt_tokens`, `prompt_tokens` |
| Completion tokens | `llm.usage.completion_tokens`, `completion_tokens` |
| Total tokens | `llm.usage.total_tokens`, `total_tokens` |
| Per-request cost | `llm.request.cost`, `llm.usage.cost`, `cost_usd` |
| Span/operation type | `span.kind`, `component`, `operation`, `llm.span_type` |
| Environment/deployment | `deployment.environment`, `env`, `environment` |
| User identifiers | `user.id`, `tenant.id`, `customer_id`, `account_id` |
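Because field names differ across instrumentations, a common first step before filtering, masking, or aggregating is to coalesce the variants above into one canonical set of names. The JavaScript sketch below illustrates the idea using the example names from the table; the `normalizeLlmEvent` function and the per-1K-token rate map are illustrative assumptions, not real pricing or a built-in Cribl Function.

```javascript
// Sketch: coalesce field-name variants into the generic names used in
// these guides, and derive estimated_cost_usd from token counts.
// PLACEHOLDER_RATES_PER_1K is a hypothetical pricing map -- substitute
// your provider's actual rates.

const PLACEHOLDER_RATES_PER_1K = {
  // model name -> USD per 1K prompt/completion tokens (made-up values)
  'example-model': { prompt: 0.0005, completion: 0.0015 },
};

// Return the first defined value among the candidate field names.
function coalesce(event, names) {
  for (const n of names) {
    if (event[n] != null) return event[n];
  }
  return undefined;
}

function normalizeLlmEvent(event) {
  const out = { ...event };
  out.model = coalesce(event, ['llm.request.model', 'model', 'request.model']);
  out.prompt_tokens = coalesce(event, ['llm.usage.prompt_tokens', 'prompt_tokens']);
  out.completion_tokens = coalesce(event, ['llm.usage.completion_tokens', 'completion_tokens']);
  // Fall back to summing prompt + completion when no total is reported.
  out.total_tokens = coalesce(event, ['llm.usage.total_tokens', 'total_tokens'])
    ?? (out.prompt_tokens ?? 0) + (out.completion_tokens ?? 0);

  const rates = PLACEHOLDER_RATES_PER_1K[out.model];
  if (rates && out.prompt_tokens != null && out.completion_tokens != null) {
    out.estimated_cost_usd =
      (out.prompt_tokens / 1000) * rates.prompt +
      (out.completion_tokens / 1000) * rates.completion;
  }
  return out;
}
```

In Cribl Stream, equivalent logic could live in an Eval Function's value expressions; the point is only that downstream Routes, masks, and metrics become simpler once every event carries the same canonical field names.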