Cribl LogStream can receive data from various Sources, including Splunk, HTTP, Elastic Beats, Kinesis, Kafka, TCP JSON, and many others.
Supported data Sources that send to Cribl LogStream:
- Splunk TCP
- Splunk HEC
- Elasticsearch API
- TCP JSON
- TCP Raw
- Raw HTTP/S
- Kinesis Firehose
- SNMP Trap
Data from these Sources is normally sent to a set of LogStream Workers through a load balancer. Some Sources, such as Splunk forwarders, have native load-balancing capabilities, so you should point these directly at LogStream.
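As an illustration of how a sender might push events to one of these Sources, the sketch below POSTs a single event to a Splunk HEC Source. The URL, port, and token are assumptions for the example: substitute your Worker's address and the token configured on your HEC Source.

```python
import json
import urllib.request

# Hypothetical values -- replace with your LogStream Worker's address
# and the token configured on the Splunk HEC Source.
HEC_URL = "http://localhost:8088/services/collector/event"
HEC_TOKEN = "my-hec-token"

def build_hec_payload(event, source="demo", sourcetype="_json"):
    """Wrap an event in the Splunk HEC JSON envelope."""
    return json.dumps({"event": event, "source": source,
                       "sourcetype": sourcetype})

def send_event(event):
    """POST a single event to the HEC endpoint."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(event).encode("utf-8"),
        headers={"Authorization": f"Splunk {HEC_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In production, senders like this would sit behind the load balancer described above, so that events spread across all Workers.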
Supported Sources that Cribl LogStream fetches data from:
- Kinesis Streams
- Azure Blob Storage
- Azure Event Hubs
- Office 365 Services
- Office 365 Activity
- Office 365 Message Trace
Sources that are internal to Cribl LogStream:
- Cribl Internal > CriblLogs
- Cribl Internal > CriblMetrics
For each Source type, you can create multiple definitions, depending on your requirements.
To configure Sources, select Data > Sources, select the desired type from the tiles or the left menu, and then click + Add New.
You can capture data from a single enabled Source directly from the Sources UI, instead of using the Preview pane. To initiate an immediate capture, click the Live button on the Source's configuration row.
You can also start an immediate capture from within an enabled Source's configuration modal, by clicking the modal's Live Data tab.
To accelerate your setup, LogStream ships with several common Sources configured for typical listening ports, but not switched on. Open, clone (if desired), modify, and enable any of these preconfigured Sources to get started quickly:
- Syslog – TCP Port 9514, UDP Port 9514
- Splunk TCP – Port 9997
- Splunk HEC – Port 8088
- TCP JSON – Port 10070
- TCP – Port 10060
- HTTP – Port 10080
- Elasticsearch API – Port 9200
- SNMP Trap – Port 9162
- Cribl Internal > CriblLogs – Internal
- Cribl Internal > CriblMetrics – Internal
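To try one of these preconfigured Sources, you can push events to the TCP JSON listener, whose wire format is newline-delimited JSON (one JSON object per line). The host and port below are assumptions matching the default shown above:

```python
import json
import socket

def to_tcp_json(events):
    """Serialize events as newline-delimited JSON -- one JSON
    object per line, as the TCP JSON Source expects."""
    return "".join(json.dumps(e) + "\n" for e in events)

def ship(events, host="localhost", port=10070):
    """Open a TCP connection to the (enabled) preconfigured
    TCP JSON Source and send the serialized events."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(to_tcp_json(events).encode("utf-8"))
```

Remember that the preconfigured Sources ship disabled, so you must enable the TCP JSON Source before events sent this way will be accepted.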
On the Destination side, you can configure how each LogStream output will respond to a backpressure situation – a situation where its in-memory queue is overwhelmed with data.
All Destinations default to Block mode, in which they refuse to accept new data until the downstream receiver is ready. Here, LogStream back-propagates block signals through the Source, all the way back to the sender (provided the sender also supports backpressure).
All Destinations also support Drop mode, which will simply discard new events until the receiver is ready.
Several Destinations also support a Persistent Queue option to minimize data loss. Here, the Destination will write data to disk until the receiver is ready. Then it will drain the disk-buffered data in FIFO (first in, first out) order. See Persistent Queues for details about all three modes, and about Persistent Queue support.
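The three backpressure modes can be modeled with a toy sketch. This is a conceptual illustration only, not LogStream's implementation; the in-memory and "disk" queues here are plain Python deques:

```python
from collections import deque

class Output:
    """Toy model of a Destination output with three backpressure
    modes: 'block', 'drop', and 'queue' (persistent queue)."""

    def __init__(self, mode, capacity=3):
        self.mode = mode
        self.capacity = capacity   # in-memory buffer size
        self.buffer = deque()
        self.disk = deque()        # stands in for the on-disk queue
        self.receiver_ready = True

    def send(self, event):
        if self.receiver_ready and len(self.buffer) < self.capacity:
            self.buffer.append(event)
            return "sent"
        # Backpressure: buffer full or receiver down.
        if self.mode == "block":
            return "blocked"       # caller must retry; signal propagates upstream
        if self.mode == "drop":
            return "dropped"       # event is discarded
        self.disk.append(event)    # persistent queue: spill to disk
        return "queued"

    def drain(self):
        """When the receiver recovers, flush disk-buffered events FIFO."""
        while self.disk and len(self.buffer) < self.capacity:
            self.buffer.append(self.disk.popleft())
```

Note how `drain` pops from the left of the disk queue, preserving the FIFO order described above.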
The S3 Source provides a configurable Advanced Settings > Socket timeout option, to prevent data loss (partial downloading of logs) during backpressure delays.