
Pipelines

What Are Pipelines

After your data has been matched by a Route, it gets delivered to a Pipeline. A Pipeline is a list of Functions that work on the data. As with Routes, the order in which the Functions are listed matters.

📘

Functions in a Pipeline are evaluated in order, top to bottom.

How Do Pipelines Work

Events are always delivered to the beginning of a Pipeline via a Route. In the display below, the Stats column shows data for the last 15 minutes.

Pipelines and Route inputs

📘

You can press the ] (right-bracket) shortcut key to toggle between the Preview pane and an expanded Pipelines display. This works as long as no field has focus.

Within the Pipeline, events are processed by each Function, in order. A Pipeline always moves events in one direction: toward the system's output. This is deliberate, to keep the design simple and to avoid potential loops.

Pipeline Functions

Click the gear icon at top right to open the Pipeline's Settings, where you can attach the Pipeline to a Route. In the Settings' Async function timeout (ms) field, you can enter extra time to accommodate Functions that might take much longer than normal to execute. (An example would be a Lookup Function processing a large lookup file.)

Pipeline Settings
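Under the hood, these settings are stored as part of the Pipeline's JSON definition (editable via Advanced Mode, described below). As a minimal sketch, the timeout setting might serialize roughly like this; the asyncFuncTimeout field name is an assumption based on the setting's label, and the exact schema can vary by version:

```json
{
  "id": "my_pipeline",
  "conf": {
    "asyncFuncTimeout": 1000,
    "functions": []
  }
}
```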

Click Advanced Mode to edit the Pipeline's definition as JSON text. In this mode's editor, you can directly edit multiple values. You can also use the Import and Export buttons to copy and modify existing Pipeline configurations.

Advanced Pipeline Editing
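As a hedged illustration of what Advanced Mode's JSON might look like, here is a sketch of a Pipeline definition containing one Eval Function. The function ID and conf field names are assumptions for illustration; an exported Pipeline from your own instance is the authoritative reference for the schema:

```json
{
  "id": "my_pipeline",
  "conf": {
    "asyncFuncTimeout": 1000,
    "functions": [
      {
        "id": "eval",
        "filter": "true",
        "disabled": false,
        "conf": {
          "add": [
            { "name": "env", "value": "'production'" }
          ]
        }
      }
    ]
  }
}
```

A practical workflow is to Export a working Pipeline, edit the JSON offline, and Import it back as the starting point for a new Pipeline.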

📘

You can streamline the above display by organizing related Functions into Function groups.

Types of Pipelines

You can apply various Pipeline types at different stages of data flow. All Pipelines have the same basic internal structure (a series of Functions); the types below differ only in their position in the system.

Input conditioning, processing, and output conditioning Pipelines

Pre-Processing Pipelines

These optional Pipelines are attached to a Source, to condition (normalize) events before they're delivered to a processing Pipeline.

Typical use cases are event formatting, or applying Functions to all events of an input. (E.g., extract a message field before pushing events to various processing Pipelines.) You configure these pre-processing Pipelines on individual Sources. Fields extracted using pre-processing Pipelines are made available to Routes.
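For example, a pre-processing Pipeline that extracts a message field from _raw might consist of a single Regex Extract Function, sketched below. The conf field names here are illustrative assumptions, not a guaranteed schema:

```json
{
  "id": "extract_message",
  "conf": {
    "functions": [
      {
        "id": "regex_extract",
        "filter": "true",
        "conf": {
          "regex": "/msg=(?<message>\\S+)/",
          "source": "_raw"
        }
      }
    ]
  }
}
```

Because this Pipeline runs before Routes are evaluated, the extracted message field can then be referenced in Route filter expressions.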

Processing Pipelines

These are "normal" event processing Pipelines.

Post-Processing Pipelines

These Pipelines are attached to a Destination to normalize the events before they're sent out. Typical use cases are applying Functions that transform or shape events per receiver requirements. (E.g., to ensure that a _time field exists for all events bound to a Splunk receiver.) You configure these post-processing Pipelines on individual Destinations.
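As a sketch of the Splunk example above, an Eval Function in a post-processing Pipeline could populate _time only when it's missing. The filter expression and conf layout are assumptions for illustration:

```json
{
  "id": "shape_for_splunk",
  "conf": {
    "functions": [
      {
        "id": "eval",
        "filter": "!_time",
        "conf": {
          "add": [
            { "name": "_time", "value": "Date.now() / 1000" }
          ]
        }
      }
    ]
  }
}
```

Here the Function's filter restricts it to events that lack _time, so events that already carry a timestamp pass through unchanged.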

Considerations

Every Function in a Pipeline is equipped with its own filter, which defaults to true (i.e., it matches all events). Even though filters are not required, we recommend using them as often as possible.

As with Routes, the general goal is to minimize the work that each Function does. The fewer events a Function has to operate on, the better the overall performance. For example, if a Pipeline has two Functions, f1 and f2, where f1 operates on source 'foo' and f2 operates on source 'bar', it makes sense to apply a source=='foo' filter to f1 and a source=='bar' filter to f2.
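A sketch of that two-Function arrangement, with hypothetical Eval bodies and the same assumed schema as the earlier examples:

```json
{
  "id": "filtered_pipeline",
  "conf": {
    "functions": [
      {
        "id": "eval",
        "filter": "source=='foo'",
        "conf": { "add": [ { "name": "handled_by", "value": "'f1'" } ] }
      },
      {
        "id": "eval",
        "filter": "source=='bar'",
        "conf": { "add": [ { "name": "handled_by", "value": "'f2'" } ] }
      }
    ]
  }
}
```

Each Function now evaluates only the events whose source matches its filter; all other events pass through it untouched.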
