Cribl LogStream supports sending data to Azure Event Hubs. This is a streaming Destination type.
Select Data > Destinations, then select Azure > Event Hubs from the Data Destinations page's tiles or left menu. Click Add New to open the Event Hubs > New Destination modal, which provides the following fields.
Output ID: Enter a unique name to identify this Azure Event Hubs definition.
Brokers: List of Event Hubs Kafka brokers to connect to (e.g., yourdomain.servicebus.windows.net:9093). Find the hostname in Shared Access Policies, in the host portion of the primary or secondary connection string.
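As an illustration (not part of LogStream itself), the broker hostname can be derived from an Event Hubs connection string like this; the `Endpoint=sb://` segment and Kafka port 9093 follow Microsoft's documented conventions, while the helper name and sample values are our own:

```python
import re

def broker_from_connection_string(conn_str: str, port: int = 9093) -> str:
    """Derive the Kafka broker address from an Event Hubs connection string.

    The Endpoint segment looks like: Endpoint=sb://yourdomain.servicebus.windows.net/
    """
    match = re.search(r"Endpoint=sb://([^/;]+)", conn_str)
    if not match:
        raise ValueError("no Endpoint=sb://... segment found")
    return f"{match.group(1)}:{port}"

# Sample connection string with placeholder values:
conn = ("Endpoint=sb://yourdomain.servicebus.windows.net/;"
        "SharedAccessKeyName=RootManageSharedAccessKey;"
        "SharedAccessKey=abc123=")
print(broker_from_connection_string(conn))
# yourdomain.servicebus.windows.net:9093
```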
Event Hub name: The name of the Event Hub (a.k.a. Kafka topic) on which to publish events. Can be overwritten using the event's __topicOut field.
Acknowledgments: Control the number of required acknowledgments. Defaults to 1.
Record data format: Format to use to serialize events before writing to the Event Hubs Kafka brokers. Defaults to JSON.
Compression: This option is removed as of LogStream 2.4.4, due to incompatibility on the Event Hubs side. In LogStream versions through 2.4.3, you must manually change the setting from its default to None in order to enable a stable connection with Event Hubs.
Backpressure behavior: Whether to block, drop, or queue events when all receivers in this group are exerting backpressure. Defaults to Block.
This section is displayed when the Backpressure behavior is set to Persistent Queue.
Max file size: The maximum data size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.
Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped, and data blocking is applied. Enter a numeral with units of KB, MB, etc.
Queue file path: The location for the persistent queue files. This will take the form your/path/here/<worker-id>/<output-id>. Defaults to $CRIBL_HOME/state/queues.
Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.
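To make the queue settings above concrete, here is a minimal sketch (our own illustration, not LogStream code) of how a persistent queue might pick its directory and decide when to close a file. The worker ID, output ID, and size limit are hypothetical stand-ins for configured values:

```python
import os

# Hypothetical values standing in for a real Worker Process and Output ID.
QUEUE_ROOT = "your/path/here"    # the configured Queue file path
WORKER_ID = "0"
OUTPUT_ID = "eventhubs-out"
MAX_FILE_SIZE = 1 * 1024 * 1024  # e.g., a 1 MB per-file limit

def queue_dir() -> str:
    """Queue files live under <path>/<worker-id>/<output-id>."""
    return os.path.join(QUEUE_ROOT, WORKER_ID, OUTPUT_ID)

def should_close_file(bytes_written: int, next_record: bytes) -> bool:
    """Close (and optionally compress) the current file before it would exceed the limit."""
    return bytes_written + len(next_record) > MAX_FILE_SIZE

print(queue_dir())
```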
Enabled: Defaults to Yes.
Validate server certs: Defaults to No. For Event Hubs, this must always be disabled.
Authentication parameters to use when connecting to brokers. Using TLS is highly recommended.
Enabled: Defaults to Yes. (Toggling to No hides the remaining settings in this group.)
SASL mechanism: The SASL (Simple Authentication and Security Layer) authentication mechanism to use. PLAIN is the only mechanism currently supported for Event Hubs Kafka brokers.
Username: The username for authentication. For Event Hubs, this should always be $ConnectionString.
Password: Event Hubs primary or secondary connection string. From Microsoft's documentation, the format is: Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>
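Putting the authentication fields together, the equivalent settings for a generic Kafka client would look roughly like the following. This is a hedged sketch: the connection-string values are placeholders, and the dict keys follow common Kafka-client option naming rather than any specific library's API. The literal `$ConnectionString` username is Microsoft's documented convention:

```python
# Placeholder connection string; substitute your real primary or secondary one.
conn_str = ("Endpoint=sb://yourdomain.servicebus.windows.net/;"
            "SharedAccessKeyName=RootManageSharedAccessKey;"
            "SharedAccessKey=<key-value>")

# Keys mirror common Kafka client option names; adapt to your client library.
producer_config = {
    "bootstrap.servers": "yourdomain.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",       # TLS + SASL, as Event Hubs requires
    "sasl.mechanism": "PLAIN",             # the only mechanism Event Hubs supports
    "sasl.username": "$ConnectionString",  # literal string, per Microsoft's docs
    "sasl.password": conn_str,             # the full connection string is the password
}
print(producer_config["sasl.mechanism"])
```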
Pipeline: Pipeline to process data before sending the data out using this output.
System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:
cribl_host – LogStream Node that processed the event.
cribl_wp – LogStream Worker Process that processed the event.
cribl_input – LogStream Source that processed the event.
cribl_output – LogStream Destination that processed the event.
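The wildcard support mentioned above can be illustrated with a small sketch of our own, using Python's fnmatch for the pattern matching; the field names come from the list above, and the sample values are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical values a Worker might supply for each system field.
AVAILABLE = {
    "cribl_pipe": "my-pipeline",
    "cribl_host": "worker-01",
    "cribl_wp": "wp3",
    "cribl_input": "in_syslog",
    "cribl_output": "eventhubs-out",
}

def add_system_fields(event: dict, patterns: list[str]) -> dict:
    """Copy every available system field whose name matches one of the patterns."""
    out = dict(event)
    for name, value in AVAILABLE.items():
        if any(fnmatch(name, p) for p in patterns):
            out[name] = value
    return out

# A wildcard pattern picks up all five fields at once:
evt = add_system_fields({"message": "hello"}, ["cribl_*"])
print(sorted(evt))
```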
Max record size (KB, uncompressed): Maximum size (KB) of each record batch before compression. This setting should be less than the message.max.bytes setting on the Kafka brokers. Defaults to 768.
Max events per batch: Maximum number of events in a batch before forcing a flush. Defaults to 1000.
Flush period (sec): Maximum time between requests. Low settings could cause the payload size to be smaller than its configured maximum. Defaults to 1.
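The three batching knobs above interact as "flush on whichever limit trips first." A minimal sketch of that logic (our illustration; the constants stand in for hypothetical configured limits):

```python
MAX_BATCH_BYTES = 768 * 1024   # Max record size, uncompressed
MAX_BATCH_EVENTS = 1000        # Max events per batch
FLUSH_PERIOD_SEC = 1.0         # Flush period

def should_flush(batch_bytes: int, batch_events: int, last_flush: float, now: float) -> bool:
    """Flush when any one of the size, count, or time limits is reached."""
    return (
        batch_bytes >= MAX_BATCH_BYTES
        or batch_events >= MAX_BATCH_EVENTS
        or (now - last_flush) >= FLUSH_PERIOD_SEC
    )

# A small batch gets flushed early by the timer alone:
print(should_flush(10_000, 5, last_flush=0.0, now=1.5))  # True
```

This is why a low Flush period can shrink payloads below the configured maximums: the timer fires before either size limit is reached.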
Cribl LogStream uses a set of internal fields to assist in forwarding data to a Destination.
Fields for this Destination: