
Sources

Cribl LogStream can receive data from various Sources, including Splunk, HTTP, Elastic Beats, Kinesis, Kafka, TCP JSON, and many others.

Push and Pull Sources

PUSH Sources

Supported data Sources that send to Cribl LogStream:

Data from these Sources is normally sent to a set of LogStream Workers through a load balancer. Some Sources, such as Splunk forwarders, have native load-balancing capabilities, so you should point these directly at LogStream.
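As an illustration, a custom application can push newline-delimited JSON events to a TCP JSON Source. The sketch below is a minimal example only: the host, port, and event fields are placeholders, and it omits any auth-token record your Source may be configured to require.

```python
# Minimal sketch: push one newline-delimited JSON event to a TCP JSON Source.
# HOST and PORT are hypothetical; point them at your load balancer or a
# LogStream Worker, using whatever port your TCP JSON Source listens on.
import json
import socket

HOST, PORT = "logstream.example.com", 10070  # placeholder endpoint

event = {"_raw": "sample event", "host": "web-01", "sourcetype": "demo"}

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # TCP JSON expects one JSON object per line.
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))
```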

PULL Sources

Supported Sources that Cribl LogStream fetches data from:

Internal Sources

Sources that are internal to Cribl LogStream:

Configuring and Managing Sources

For each Source type, you can create multiple definitions, depending on your requirements.

To configure Sources, select Data > Sources, select the desired type from the tiles or the left menu, and then click + Add New.

Backpressure Behavior

On the Destination side, you can configure how each LogStream output will respond to backpressure: a condition in which its in-memory queue is overwhelmed with data.

All Destinations default to Block mode, in which they will refuse to accept new data until the downstream receiver is ready. Here, LogStream will back-propagate block signals through the Source, all the way back to the sender (assuming the sender also supports backpressure).

All Destinations also support Drop mode, which will simply discard new events until the receiver is ready.

Several Destinations also support a Persistent Queue option to minimize data loss. Here, the Destination will write data to disk until the receiver is ready. Then it will drain the disk-buffered data in FIFO (first in, first out) order. See Persistent Queues for details about all three modes, and about Persistent Queue support.
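To make the three modes concrete, here is a conceptual sketch. This is not LogStream code; it only illustrates how a full in-memory queue is handled differently under Block, Drop, and Persistent Queue behavior.

```python
# Conceptual sketch (not LogStream code) of the three backpressure modes
# when a Destination's in-memory queue is full.
from collections import deque
from enum import Enum

class Mode(Enum):
    BLOCK = "block"   # refuse new data; pressure propagates back toward the sender
    DROP = "drop"     # discard new events until the receiver is ready
    QUEUE = "queue"   # spill to disk, then drain in FIFO order when the receiver recovers

def handle_event(event, memory_queue: deque, disk_queue: list,
                 mode: Mode, max_len: int) -> bool:
    """Return True if the event was accepted, False if the sender must retry."""
    if len(memory_queue) < max_len:
        memory_queue.append(event)
        return True
    if mode is Mode.BLOCK:
        return False                # caller blocks and retries; signal travels upstream
    if mode is Mode.DROP:
        return True                 # event is silently discarded
    disk_queue.append(event)        # persistent queue: buffer to disk, drain FIFO later
    return True
```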

Other Backpressure Options

The S3 Source provides a configurable Advanced Settings > Socket timeout option to prevent data loss (partially downloaded logs) during backpressure delays.

Diagnosing Backpressure Errors

When backpressure affects HTTP-based Sources (Splunk HEC, HTTP/S, Raw HTTP/S, and Kinesis Firehose), LogStream's internal logs will show a 503 error code.
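A quick way to observe this is to post a test event to the affected Source and inspect the response code. The sketch below assumes a Splunk HEC Source; the URL, port, and token are placeholders, and the TLS-verification setting is only for self-signed test certificates.

```python
# Sketch: detect backpressure (HTTP 503) when pushing a test event to a Splunk HEC Source.
# The URL, port, and token are hypothetical; substitute your own Source settings.
import requests

url = "https://logstream.example.com:8088/services/collector/event"  # placeholder
headers = {"Authorization": "Splunk <your-hec-token>"}                # placeholder token

resp = requests.post(url, json={"event": "hello"}, headers=headers,
                     verify=False)  # verify=False only for self-signed test certs
if resp.status_code == 503:
    # Receiver is exerting backpressure; back off and retry after a delay.
    print("503 Service Unavailable: LogStream is applying backpressure")
else:
    resp.raise_for_status()
```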
