
Event Processing Order

The expanded schematic below shows how all events in the Cribl LogStream ecosystem are processed linearly, from left to right.

[Schematic: LogStream event processing, in detail]

Here are the stages of event processing:

  1. Sources: Data arrives from your choice of external providers. (LogStream supports Splunk, HTTP/S, Elastic Beats, Amazon Kinesis/S3/SQS, Kafka, TCP raw or JSON, and many others.)

  2. Custom command: Optionally, you can pass this input's data to an external command before the data continues downstream. The external command consumes the data via stdin, processes it, and emits its output via stdout. (See the stdin/stdout sketch after this list.)

  3. Event Breakers can, optionally, break up incoming bytestreams into discrete events. (Note that because Event Breakers precede the auth token stages below, Breakers cannot see or act on tokens.) A simple breaker is sketched after this list.

  4. Auth tokens are applied at this point on Splunk HEC Sources. (Details here.)

  5. Fields/Metadata: Optionally, you can add these enrichments to each incoming event. You add fields by specifying key/value pairs, per Source, in a format similar to LogStream's Eval Function. Each key defines a field name, and each value is a JavaScript expression (or constant) used to compute the field's value. (See the enrichment sketch after this list.)

  6. Auth tokens are applied at this point on HTTP-based Sources other than Splunk HEC (e.g., Elasticsearch API Sources).

  7. Pre-processing Pipeline: Optionally, you can use a single Pipeline to condition (normalize) data from this input before the data reaches the Routes.

  8. Routes map incoming events to Processing Pipelines and Destinations. A Route can accept data from multiple Sources, but each Route can be associated with only one Pipeline and one Destination.

  9. Processing Pipelines perform all event transformations. Within a Pipeline, you define these transformations as a linear series of Functions. A Function is an atomic piece of JavaScript code invoked on each event.

  10. Post-processing Pipeline: Optionally, you can append a Pipeline to condition (normalize) data from each Processing Pipeline before the data reaches its Destination.

  11. Destinations: Each Route/Pipeline combination forwards processed data to your choice of streaming or storage Destination. (LogStream supports Splunk, Syslog, Elastic, Kafka/Confluent, Amazon S3, Filesystem/NFS, and many other options.)
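
To make the custom command stage (step 2) concrete, here is a minimal, hypothetical external command written as a Node.js script. It is not part of LogStream; it only illustrates the stdin-in, stdout-out contract described above, and the uppercase transform is just a placeholder.

    // Hypothetical external command: consume data on stdin, transform it,
    // and emit the result on stdout.
    process.stdin.setEncoding('utf8');

    let buffered = '';
    process.stdin.on('data', (chunk) => { buffered += chunk; });
    process.stdin.on('end', () => {
      // Any transformation could go here; uppercasing is only an example.
      process.stdout.write(buffered.toUpperCase());
    });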
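Step 3 can be pictured with the sketch below. This is not LogStream's Event Breaker implementation; it only illustrates the general idea of turning a continuous bytestream into discrete events, assuming a simple newline delimiter.

    // Illustrative newline-based breaker: accumulate incoming chunks and
    // emit a discrete event for each complete line.
    function makeBreaker(onEvent) {
      let pending = '';
      return function feed(chunk) {
        pending += chunk;
        const lines = pending.split('\n');
        pending = lines.pop();                  // keep any incomplete trailing line
        lines.forEach((line) => onEvent({ _raw: line }));
      };
    }

    const feed = makeBreaker((evt) => console.log('event:', evt));
    feed('first log line\nsecond log ');
    feed('line\n');                             // completes the second event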
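The key/value enrichments from step 5 can be sketched as follows. The field names and expressions here are hypothetical; the point is that each key names a field, and each value is a JavaScript expression (or constant) evaluated against the event.

    // Hypothetical Fields/Metadata definitions for one Source.
    const fieldDefs = {
      environment: () => 'production',                  // constant value
      received_at: () => Date.now(),                    // computed value
      index: (evt) => evt.sourcetype === 'syslog'       // expression that
        ? 'network'                                     // inspects the event
        : 'main',
    };

    // Apply the definitions to an incoming event.
    function enrich(event) {
      const out = { ...event };
      for (const [name, compute] of Object.entries(fieldDefs)) {
        out[name] = compute(out);
      }
      return out;
    }

    console.log(enrich({ _raw: '<134>Oct 11 22:14:15 fw01 ...', sourcetype: 'syslog' }));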

Pipelines Everywhere

All Pipelines have the same basic internal structure: they're a series of Functions. The three Pipeline types identified above differ only in their position in the system.
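
As a rough mental model (not LogStream's actual engine), a Pipeline can be thought of as an ordered list of Functions applied to each event; the Functions below are invented for illustration only.

    // A Pipeline modeled as a series of Functions, each taking an event and
    // returning the (possibly modified) event, or null to drop it.
    const pipeline = [
      (evt) => ({ ...evt, _time: evt._time || Date.now() }),       // add a timestamp
      (evt) => (evt.level === 'debug' ? null : evt),               // drop debug events
      (evt) => ({ ...evt, host: (evt.host || '').toLowerCase() }), // normalize host
    ];

    function run(fns, event) {
      let evt = event;
      for (const fn of fns) {
        evt = fn(evt);
        if (evt === null) return null;          // event dropped by a Function
      }
      return evt;
    }

    console.log(run(pipeline, { _raw: 'GET /index.html 200', host: 'WEB01', level: 'info' }));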