OTLP Logs
The OTLP Logs Function ingests logs sent by the Cribl Edge OTel Source, normalizes (that is, formats) and optionally batches them, and sends them on to the Cribl Edge OTel Destination.
How the OTel Source Transforms Log Events
The events that this Function processes can only come from the Cribl Edge OTel Source, which itself always ingests events in the form of OTLP logs protocol buffers (protobufs). These protobufs are structures that nest objects within other objects; at the bottom of the nested structure are the log entries themselves.
Each OTLP logs protobuf contains top-level ResourceLogs objects that associate an OTLP Resource with one or more log entries. To get familiar with the internal nomenclature of the OTLP logs protobuf, let’s look at its nested objects from the top down.
Object | Description |
---|---|
Log protobuf message | Contains one or more ResourceLogs |
ResourceLogs | Contains one Resource and one or more ScopeLogs |
ScopeLogs | Contains one Scope and one or more LogRecords |
LogRecord | The log entry |
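For orientation, here is a minimal sketch of that nesting, rendered as JSON. The field values (the service.name attribute, the logger name, and the body text) are placeholders invented for illustration; real payloads carry many more fields.
{
  "resource_logs": [{
    "resource": {
      "attributes": [{ "key": "service.name", "value": { "string_value": "example-service" } }]
    },
    "scope_logs": [{
      "scope": { "name": "example.logger", "version": "1.0.0" },
      "log_records": [{
        "time_unix_nano": 1717010319837985500,
        "severity_number": 9,
        "severity_text": "INFO",
        "body": { "string_value": "example log entry" }
      }]
    }]
  }]
}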
The OTel Source takes these OTLP log protobufs and transforms them into Cribl events that the OTLP Logs Function can ingest.
The OTel Source can send Cribl events in two forms: with or without logs extracted. Which form it sends depends on whether Advanced Settings > Extract logs is turned on or off in the Source.
When Extract logs is off:
The OTLP Logs Function cannot batch the incoming events, even when Batch OTLP logs is turned on. The Function just cleans up the events and sends them on.
The Cribl events sent to the OTLP Logs Function can include log events, non-log events, or both. The OTLP Logs Function will always send log events through to the OTel Destination. If non-log events arrive, the Function will drop them or send them through, depending on whether its Drop non-log events setting is turned on or off.
When Extract logs is on:
The OTLP Logs Function will batch the incoming events if Batch OTLP logs is turned on. Otherwise the Function just cleans up the events and sends them on.
Compared to the original OTLP log protobufs, the Cribl events sent by the OTel Source have a flatter structure: the resource and scope objects and the extracted records are all peers. The OTLP Logs Function detects this and, if batching is enabled, batches the events such that for each resource object there is one scope_logs object containing multiple pairs of scope and log_records objects.
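As a quick sketch of that flatter structure (abbreviated, with placeholder logger name and body), an extracted-logs Cribl event carries the resource, scope, and log-record fields as peers at the top level; the Advanced Example later in this topic shows full samples.
{
  "resource": {
    "attributes": { "telemetry.sdk.language": "python" }
  },
  "scope": { "name": "example.logger", "version": "unknown" },
  "time_unix_nano": 1717010319837985500,
  "severity_number": 9,
  "severity_text": "INFO",
  "body": "example log entry",
  "_time": 1717010325.183
}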
How This Function Batches Logs
For OTLP logs, batching means grouping log events that arrive within a specified time window and whose OTLP Resource Attributes are identical. For example, suppose the Function receives a thousand events over a period of 30 seconds, and those events originate from a set of 100 servers. Each event carries a set of Resource Attributes, one or more of which describe the server it came from. For simplicity, assume that the Resource Attributes of all these events are identical except for the ones that describe the server. The Function will create a separate batch for each of those 100 servers, because the Resource Attributes differentiate the events according to which server each one came from.
Each incoming log event contains both its resource attributes and the log entry itself. In a batch, the resource attributes need only be written once, greatly reducing the amount of data sent over the wire.
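To make the grouping rule concrete, consider a hypothetical pair of extracted-logs events whose Resource Attributes differ only in a host.name value (host.name is an assumed attribute here, not one taken from the samples below). Because the attribute sets are not identical, the events land in separate batches, and each batch carries its resource attributes only once:
Event A resource attributes: { "host.name": "web-01", "telemetry.sdk.language": "python" }
Event B resource attributes: { "host.name": "web-02", "telemetry.sdk.language": "python" }
Result: one batch for web-01 and one batch for web-02, each containing all log records received for that server within the batching window.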
For detailed insights into events that passed through the Function but were dropped, set the logging level for the func:otlp_logs channel to debug. The Function will then output a stats log message once per minute with the relevant information.
Usage
Ideally, this Function should process events after other Functions have formatted them. Therefore, Cribl recommends placing this Function last in the Pipeline.
If no log events appear in Simple or Full Preview, try enabling Show Dropped Events. If Cribl Edge processed and dropped events, there’s an issue with your data: either the incoming events are malformed, or their resource attributes do not match what the Function expects.
Filter: Filter expression (JS) that selects data to feed through the Function. The default true setting passes all events through the Function.
Description: Simple description of this Function’s purpose in this Pipeline. Defaults to empty.
Final: If toggled to Yes, stops feeding data to the downstream Functions. Defaults to No.
Drop non-log events: Whether or not to drop any non-log events. Defaults to No.
Batch OTLP logs: Whether to batch OTLP logs by shared top-level resource attributes. The Function only tries to batch events where the logs have been extracted – that is, where the sending OTel Source had Advanced Settings > Extract logs enabled when processing the events.
When enabled, exposes these additional controls:
Batch size: The number of log records after which a batch will be sent, regardless of whether the timeout duration has been reached.
Batch timeout (ms): Time duration after which a batch will be sent regardless of size. Defaults to 200 milliseconds.
Batch size limit (kb): Maximum batch size in kilobytes. 0 means no limit.
Batch log metadata keys: Optionally, add metadata keys that you want the Function to create batcher instances for, in addition to any batching based on resource attributes. When set, the Function will create one batcher instance per distinct combination of the metadata values (see the sketch after this list).
Metadata cardinality limit: Limits the number of unique combinations of metadata key values that the batcher will process. Once the limit has been reached, the Function will drop events whose metadata key values constitute a new combination. Defaults to 1000.
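As a hypothetical illustration (assuming host is configured as a metadata key, which is not a value taken from this topic): if Batch log metadata keys is set to host, the Function creates one batcher instance per distinct host value it sees, in addition to any batching by resource attributes. Each new combination counts against the Metadata cardinality limit; once that limit is reached, events whose values form a new combination are dropped.
host=web-01  →  batcher instance 1
host=web-02  →  batcher instance 2
host=web-03  →  batcher instance 3 (counted against the Metadata cardinality limit)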
Basic Example
Filter: __inputId=='open_telemetry:open_telemetry'
This will process all events from an OTel Source; in this example, we assume that the Source has Advanced Settings > Extract logs turned off. Metrics and traces will either be dropped or passed through unmodified, depending on the Drop non-log events setting. Unextracted logs will be formatted and emitted; extracted logs will be formatted and then either batched or emitted, again depending on configuration.
The Function converts the Cribl event such that the Cribl Edge OTel Destination can ingest it and convert it to an OTLP log event.
See the sample incoming and outgoing events.
Advanced Example
Here, we have two incoming events sent by an OTel Source with Advanced Settings > Extract logs turned on.
The Function, with Batch OTLP logs turned on, batches the Cribl events such that the Cribl Edge OTel Destination can ingest the batch and convert it to a batched OTLP log event.
See the sample incoming and outgoing events.
Sample Events
Basic Example Incoming Event
From an OTel Source with Extract logs turned off.
{
  "resource": {
    "attributes": {
      "telemetry.sdk.language": "python"
    }
  },
  "scope_logs": [{
    "scope": {
      "name": "opentelemetry.sdk._logs._internal",
      "version": "unknown"
    },
    "log_records": [{
      "time_unix_nano": 1717010319837985500,
      "observed_time_unix_nano": 0,
      "body": "Recommendation service started, listening on port 9001",
      "dropped_attributes_count": 0,
      "severity_number": 9,
      "severity_text": "INFO"
    }]
  }],
  "_time": 1717010325.183
}
Basic Example Outgoing Event
When the OTLP Logs Function processes the incoming event above, the event below is the result. It does not matter whether Batch OTLP logs is turned on or off – the Function will not try to batch, because in the incoming event, logs were not extracted.
{
  "resource": {
    "attributes": [{
      "key": "telemetry.sdk.language",
      "value": {
        "string_value": "python"
      }
    }]
  },
  "scope_logs": [{
    "scope": {
      "name": "opentelemetry.sdk._logs._internal",
      "version": "unknown",
      "attributes": []
    },
    "log_records": [{
      "time_unix_nano": 1717010319837985500,
      "observed_time_unix_nano": 0,
      "body": { "string_value": "Recommendation service started, listening on port 9001" },
      "attributes": [],
      "dropped_attributes_count": 0,
      "severity_number": 9,
      "severity_text": "INFO"
    }]
  }],
  "_time": 1717010325.183
}
Advanced Example Incoming Events
In order to show batching in the outgoing events, let’s use two incoming events from an OTel Source where Advanced Settings > Extract logs is turned on.
Incoming Sample Event 1
{
  "schema_url": "https://opentelemetry.io/schemas/1.23.1",
  "resource": {
    "attributes": {
      "telemetry.sdk.language": "python"
    }
  },
  "scope": {
    "name": "kafka.server.KafkaRaftServer",
    "version": "unknown"
  },
  "time_unix_nano": 1717016223073301200,
  "observed_time_unix_nano": 1717016223073302800,
  "body": "[KafkaRaftServer nodeId=1] Kafka Server started",
  "dropped_attributes_count": 0,
  "severity_number": 9,
  "severity_text": "INFO",
  "_time": 1717016223.0733013
}
Incoming Sample Event 2
{
  "schema_url": "https://opentelemetry.io/schemas/1.23.1",
  "resource": {
    "attributes": {
      "telemetry.sdk.language": "python"
    }
  },
  "scope": {
    "name": "kafka.server.KafkaBoatServer",
    "version": "unknown"
  },
  "time_unix_nano": 1717016223073401200,
  "observed_time_unix_nano": 1717016223073402800,
  "body": "[KafkaBoatServer nodeId=1] Boat Server started",
  "dropped_attributes_count": 0,
  "severity_number": 9,
  "severity_text": "INFO",
  "_time": 1717016224.0733013
}
Advanced Example Outgoing Batched Events
When the OTLP Logs Function processes the incoming events above, the event below is the result. Here, Batch OTLP logs is turned on – the Function is able to batch, because in the incoming events, logs were extracted.
{
  "resource": {
    "attributes": [{
      "key": "telemetry.sdk.language",
      "value": {
        "string_value": "python"
      }
    }]
  },
  "scope_logs": [
    {
      "scope": {
        "name": "kafka.server.KafkaRaftServer",
        "version": "unknown",
        "attributes": []
      },
      "log_records": [{
        "time_unix_nano": 1717016223073301200,
        "observed_time_unix_nano": 1717016223073302800,
        "body": { "string_value": "[KafkaRaftServer nodeId=1] Kafka Server started" },
        "attributes": [],
        "dropped_attributes_count": 0,
        "severity_number": 9,
        "severity_text": "INFO",
        "_time": 1717016223.0733013
      }]
    },
    {
      "scope": {
        "name": "kafka.server.KafkaBoatServer",
        "version": "unknown",
        "attributes": []
      },
      "log_records": [{
        "time_unix_nano": 1717016223073401200,
        "observed_time_unix_nano": 1717016223073402800,
        "body": { "string_value": "[KafkaBoatServer nodeId=1] Boat Server started" },
        "attributes": [],
        "dropped_attributes_count": 0,
        "severity_number": 9,
        "severity_text": "INFO",
        "_time": 1717016224.0733013
      }]
    }
  ],
  "_time": 1717010325.183
}