Cribl LogStream can send data to various Destinations, including Splunk, Kafka, Kinesis, InfluxDB, Snowflake, Databricks, TCP JSON, and many others.
Destinations that accept events in real time are referred to as streaming Destinations:
- Splunk Single Instance
- Splunk Load Balanced
- Splunk HEC
- AWS Kinesis Streams
- AWS CloudWatch Logs
- AWS SQS
- TCP JSON
- Azure Event Hubs
- Azure Monitor Logs
- StatsD Extended
- SNMP Trap
- New Relic
- Sumo Logic
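Among the streaming Destinations above, TCP JSON is the simplest to illustrate: events are serialized as newline-delimited JSON objects over a plain TCP connection. The sketch below shows that wire format; the host/port values are placeholders, not a real listener.

```python
import json
import socket

def send_events_tcp_json(host, port, events):
    """Send events as newline-delimited JSON over a TCP socket,
    the general shape of the TCP JSON wire format. host/port are
    placeholders for an actual TCP JSON listener."""
    payload = "".join(json.dumps(e) + "\n" for e in events).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Building the payload without a live listener:
events = [{"_raw": "hello", "host": "app01"}, {"_raw": "world", "host": "app02"}]
payload = "".join(json.dumps(e) + "\n" for e in events)
```

Each line of the payload is an independent JSON object, so a receiver can parse events as they arrive rather than waiting for a batch to close.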
Destinations that accept events in groups or batches are referred to as non-streaming Destinations.
The S3 Compatible Stores Destination can be adapted to send data to downstream services like Databricks and Snowflake, for which LogStream currently has no preconfigured Destination. For details, please contact Cribl Support.
LogStream also provides these special-purpose Destinations:
- Output Router: Flexible "meta-destination." Here, you can configure rules that route data to multiple outputs.
- DevNull: An output that simply drops events. Preconfigured and active when you install LogStream, so it requires no configuration. Useful for testing.
- Default: Here, you can specify a default output from among your configured Destinations.
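The Output Router's rule-based routing can be sketched as a small function. This is a minimal illustration, assuming first-match-wins semantics and representing filters as predicate functions; the actual Output Router's rule evaluation and configuration options differ.

```python
def route(event, rules, default_output="devnull"):
    """Return the output for the first rule whose filter matches the event.
    First-match-wins and the default fallback are assumptions for this
    sketch, not a specification of Output Router behavior."""
    for filter_fn, output in rules:
        if filter_fn(event):
            return output
    return default_output

# Hypothetical rules: each pairs a filter predicate with an output name.
rules = [
    (lambda e: e.get("sourcetype") == "access_combined", "splunk"),
    (lambda e: e.get("host", "").startswith("metrics-"), "influxdb"),
]

web_out = route({"sourcetype": "access_combined"}, rules)   # "splunk"
other_out = route({"host": "db01"}, rules)                  # "devnull"
```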
Cribl LogStream uses a staging directory in the local filesystem to format and write output events before sending them to configured Destinations. After a set of conditions is met (typically file size and number of files; further details below), data is compressed and then moved to the final Destination.
An inventory of open, or in-progress, files is kept in the staging directory's root, so that LogStream does not have to walk that directory at startup; walking can get expensive if the staging directory is also the final directory. At startup, Cribl LogStream will check for any files left in progress by prior sessions, and will ensure that they're moved to their final Destination. This move is delayed after startup (default delay: 30 seconds), and processing of these files is paced at one file per service period (which defaults to 1 second).
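The startup recovery flow described above can be sketched as follows. The inventory file name and its one-path-per-line format are illustrative assumptions, not LogStream's actual on-disk layout.

```python
import os
import shutil
import time

def recover_staged_files(staging_root, final_dir,
                         startup_delay=30, service_period=1.0):
    """Sketch of startup recovery: read an inventory of in-progress files
    from the staging root, wait out the startup delay, then move one
    leftover file per service period to the final directory.
    'inventory.txt' and its format are hypothetical."""
    inventory_path = os.path.join(staging_root, "inventory.txt")
    if not os.path.exists(inventory_path):
        return []
    with open(inventory_path) as f:
        leftovers = [line.strip() for line in f if line.strip()]
    time.sleep(startup_delay)            # default delay: 30 seconds
    moved = []
    for path in leftovers:
        shutil.move(path, os.path.join(final_dir, os.path.basename(path)))
        moved.append(path)
        time.sleep(service_period)       # pace: one file per service period
    return moved
```

Keeping the inventory in the staging root is what lets startup recovery avoid a full directory walk: only the listed files need to be examined.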
Several conditions govern when files are closed and rolled out:
- File reaches its configured maximum size.
- File reaches its configured maximum open time.
- File reaches its configured maximum idle time.
If a new file needs to be opened, Cribl LogStream will enforce the maximum number of open files by closing files in the order in which they were opened.
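The roll conditions and the open-file limit above can be sketched together. This is a simplified model with illustrative thresholds, tracking only metadata rather than real file handles.

```python
import time

class FileRoller:
    """Sketch of the close/roll conditions: a tracked file is closed when
    it exceeds max size, max open time, or max idle time; opening a file
    beyond the open-file limit closes the oldest open file first.
    Default thresholds here are illustrative, not LogStream defaults."""

    def __init__(self, max_size=10_000_000, max_open_secs=300,
                 max_idle_secs=60, max_open_files=100):
        self.max_size = max_size
        self.max_open_secs = max_open_secs
        self.max_idle_secs = max_idle_secs
        self.max_open_files = max_open_files
        self.open_files = {}  # name -> {"size", "opened_at", "last_write"}

    def should_close(self, name, now=None):
        now = time.time() if now is None else now
        f = self.open_files[name]
        return (f["size"] >= self.max_size
                or now - f["opened_at"] >= self.max_open_secs
                or now - f["last_write"] >= self.max_idle_secs)

    def open_file(self, name, now=None):
        now = time.time() if now is None else now
        if len(self.open_files) >= self.max_open_files:
            # Enforce the limit by closing files in open order (oldest first).
            oldest = min(self.open_files,
                         key=lambda n: self.open_files[n]["opened_at"])
            self.close_file(oldest)
        self.open_files[name] = {"size": 0, "opened_at": now, "last_write": now}

    def close_file(self, name):
        self.open_files.pop(name, None)

    def write(self, name, nbytes, now=None):
        now = time.time() if now is None else now
        f = self.open_files[name]
        f["size"] += nbytes
        f["last_write"] = now
        if self.should_close(name, now):
            self.close_file(name)
```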
Data is delivered to all Destinations on an at-least-once basis. When a Destination is unreachable, there are three possible behaviors:
- Block - Cribl LogStream will block incoming events.
- Drop - Cribl LogStream will drop events addressed to that Destination.
- Queue - Cribl LogStream will write events for that Destination to a persistent queue on disk.
You can configure the desired behavior through a Destination's Backpressure Behavior option. If this option is not present, Cribl LogStream's default behavior is to Block.
For each Destination type, you can create multiple definitions, depending on your requirements.
To configure Destinations, select Data > Destinations, select the desired type from the tiles or the left menu, then click + Add New.
You can capture data from a single enabled Destination directly from the Destinations UI, instead of using the Preview pane. To initiate an immediate capture, click the Live button on the Destination's configuration row.
You can also start an immediate capture from within an enabled Destination's configuration modal, by clicking the modal's Live Data tab.