When events enter a Pipeline, they're processed by a series of Functions. At its core, a Function is code that executes on an event, and it encapsulates the smallest amount of processing that can happen to that event.
The term "processing" covers a variety of operations: string replacement, obfuscation, encryption, event-to-metrics conversion, etc. For example, a Pipeline can be composed of several Functions: one that replaces the term bar, another that hashes bar, and a final one that adds a field (say, dc=jfk-42) to matching events.
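Conceptually, a Pipeline is just a chain of small event-processing functions applied in order. The sketch below is illustrative Python, not LogStream's actual Function API; the event shape and function names are assumptions:

```python
# Illustrative sketch of a Pipeline as an ordered chain of Functions.
# The event shape and function names are hypothetical, not LogStream's API.

def hash_field(event):
    """Hash the value of a field (here, Python's built-in hash, masked)."""
    if "bar" in event:
        event["bar"] = hex(hash(event["bar"]) & 0xFFFFFFFF)
    return event

def add_field(event):
    """Add a static field to the event."""
    event["dc"] = "jfk-42"
    return event

# The Pipeline applies each Function to the event, in order.
pipeline = [hash_field, add_field]

event = {"bar": "sensitive-value"}
for fn in pipeline:
    event = fn(event)
```

Each Function does the smallest useful unit of work and hands the event to the next one, which is what makes Functions easy to reorder, toggle, and test individually.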
You can add as many Functions to a Pipeline as necessary, though the more you add, the longer each event takes to pass through. You can also toggle individual Functions On or Off within a Pipeline, which lets you preserve the Pipeline's structure while you optimize or debug it.
You can reposition Functions up or down the Pipeline stack to adjust their execution order. Use a Function's left grab handle to drag and drop it into place.
Similar to the Final toggle in Routes, the Final toggle here controls the flow of events at the Function level. Its states are:

No (default): Matching events processed by this Function will be passed down to the next Function.

Yes: This Function is the last one that will be applied to matching events. All Functions further down the Pipeline will be skipped. A Function with Final set to Yes will display an F indicator in the Pipeline stack.
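The Final behavior amounts to: walk the Function stack in order, and stop after the first matching Function that is marked Final. A minimal sketch in Python, where the filter, process, and final fields are assumptions for illustration:

```python
def run_pipeline(event, functions):
    """Apply Functions in order; a Function with final=True stops further
    processing for events that match its filter."""
    for fn in functions:
        if fn["filter"](event):           # does this Function match the event?
            event = fn["process"](event)  # apply it
            if fn.get("final"):           # Final == Yes: skip the rest
                break
    return event

functions = [
    {"filter": lambda e: True, "process": lambda e: {**e, "step1": True}},
    {"filter": lambda e: True, "process": lambda e: {**e, "step2": True},
     "final": True},
    # Never reached for matching events, because the previous Function is Final:
    {"filter": lambda e: True, "process": lambda e: {**e, "step3": True}},
]

result = run_pipeline({}, functions)
# result has step1 and step2, but not step3
```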
LogStream is built on a shared-nothing architecture, where each Node and its Worker Processes operate separately, and process events independently of each other. This means that all Functions operate strictly in a Worker Process context – state is not shared across processes.
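To see why per-process state matters, consider a hypothetical deduplicating Function: each Worker Process keeps its own seen-event counts, so an event that two different workers each see once is not suppressed by either. This is an illustrative sketch, not LogStream code:

```python
class Worker:
    """Each worker holds its own, unshared state (here: seen-event counts)."""
    def __init__(self):
        self.seen = {}

    def suppress(self, event):
        """Return None to drop an event already seen by THIS worker."""
        key = event["id"]
        self.seen[key] = self.seen.get(key, 0) + 1
        return None if self.seen[key] > 1 else event

w1, w2 = Worker(), Worker()
e = {"id": "evt-1"}

first = w1.suppress(dict(e))   # passes: first time w1 sees it
second = w1.suppress(dict(e))  # dropped: duplicate within w1
other = w2.suppress(dict(e))   # passes: w2 has its own separate state
```

Any Function whose logic depends on counts, caches, or accumulated state behaves this way: the state is scoped to one Worker Process, not shared across the Node.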
Cribl LogStream ships with several Functions out-of-the-box, and you can chain them together to meet your requirements. For more details, see individual Functions, and the Use Cases section, within this documentation.
For an overview of adding custom Functions to Cribl LogStream, see our blog post, Extending Cribl: Building Custom Functions.
Here are some common processing scenarios, along with the Functions that address them:

- Add GeoIP information to events: GeoIP
- Suppress events (e.g., duplicates): Suppress
- Serialize events to CEF format (to send to various SIEMs): CEF Serializer
- Serialize / change format (e.g., convert JSON to CSV): Serialize
- Flatten nested structures (e.g., nested JSON): Flatten
- Aggregate events in real time (i.e., statistical aggregations): Aggregations
- Resolve hostname from IP address: Reverse DNS (beta)
- Extract numeric values from event fields, converting them to type number: Numerify
- Send events out to a command or a local file, via stdin, from any point in a Pipeline: Tee
- Convert an XML event's elements into individual events: XML Unroll
- Duplicate events in the same Pipeline, with optional added fields: Clone
- Add a text comment within a Pipeline's UI, to label steps without changing event data: Comment
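As one concrete example from the list above, flattening a nested JSON structure can be sketched as follows. This is illustrative Python, not the Flatten Function's implementation; joining keys with `.` is an assumption:

```python
def flatten(obj, prefix=""):
    """Recursively flatten nested dicts into dotted top-level keys."""
    flat = {}
    for key, value in obj.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the key prefix.
            flat.update(flatten(value, prefix=f"{full_key}."))
        else:
            flat[full_key] = value
    return flat

nested = {"request": {"method": "GET", "url": {"path": "/health"}}}
flatten(nested)
# {"request.method": "GET", "request.url.path": "/health"}
```

Flattening like this turns deeply nested events into simple key/value pairs that downstream Functions and destinations can address by field name.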
A Function group is a collection of consecutive Functions that can be moved up and down a Pipeline's Functions stack together. Groups help you manage long stacks of Functions by streamlining their display. They are a UI visualization only: while Functions are in a group, they maintain their global position order in the Pipeline.
Function groups work much like Route groups.
To build a group from any Function, click the Function's ••• (Options) menu, then select Group Actions > Create Group.
You'll need to enter a Group Name before you can save or resave the Pipeline. Optionally, enter a Description.
Once you've saved at least one group to a Pipeline, other Functions' ••• (Options) > Group Actions submenus will add options to Move to Group or Ungroup/Ungroup All.
You can also use a Function's left grab handle to drag and drop it into, or out of, a group. A saved group that's empty displays a dashed target into which you can drag and drop Functions.