
Azure Event Hubs

Cribl LogStream supports sending data to Azure Event Hubs. This is a streaming Destination type.

Configuring Cribl LogStream to Output to Azure Event Hubs

Select Data > Destinations, then select Azure > Event Hubs from the Data Destinations page's tiles or left menu. Click Add New to open the Event Hubs > New Destination modal, which provides the following fields.

General Settings

Output ID: Enter a unique name to identify this Azure Event Hubs definition.

Brokers: List of Event Hub Kafka brokers to connect to. (E.g., yourdomain.servicebus.windows.net:9093.) Find the hostname in Shared Access Policies, in the host portion of the primary or secondary connection string.
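
If you have only the connection string handy, note that the broker address is simply the Endpoint hostname plus the Kafka port 9093. Below is a minimal Python sketch of that extraction; the connection string shown is a placeholder, not a real key.

    # Derive the Brokers value from an Event Hubs connection string:
    # the Endpoint field's hostname, plus the Kafka port 9093.
    from urllib.parse import urlparse

    # Placeholder connection string; substitute your own.
    conn_str = (
        "Endpoint=sb://dummynamespace.servicebus.windows.net/;"
        "SharedAccessKeyName=dummyaccesskeyname;SharedAccessKey=<KeyValue>"
    )

    fields = dict(part.split("=", 1) for part in conn_str.split(";"))
    broker = f"{urlparse(fields['Endpoint']).hostname}:9093"
    print(broker)  # dummynamespace.servicebus.windows.net:9093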

Event Hub name: The name of the Event Hub (a.k.a., Kafka topic) on which to publish events. Can be overridden using the __topicOut field (see Internal Fields below).

Acknowledgments: Control the number of required acknowledgments. Defaults to Leader.

Record data format: Format to use to serialize events before writing to the Event Hub Kafka brokers. Defaults to JSON.

Compression: Codec to use to compress the data before sending it to the brokers. If this option is present in your LogStream version, change it from the default Gzip to None.

🚧 This option is removed as of LogStream 2.4.4, due to incompatibility on the Event Hubs side. In LogStream versions through 2.4.3, you must manually change the setting to None in order to enable a stable connection with Event Hubs.

Backpressure behavior: Whether to block, drop, or queue events when all receivers in this group are exerting backpressure. Defaults to Block.

Persistent Queue Settings

📘 This section is displayed when the Backpressure behavior is set to Persistent Queue.

Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped, and data blocking is applied. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to $CRIBL_HOME/state/queues.

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.
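
To picture how Max file size and Compression interact, here is an illustrative Python sketch of a rotate-on-size queue writer. This is a simplified model of the general mechanism, not LogStream's actual implementation; file names and layout are invented.

    # Simplified model of a persistent queue writer: records append to the
    # current file until it reaches max_file_size, then the file is closed
    # and (optionally) gzip-compressed. Names and layout are invented.
    import gzip, os, shutil

    class QueueWriter:
        def __init__(self, path, max_file_size=1_000_000, compress=False):
            os.makedirs(path, exist_ok=True)   # e.g., .../queues/<worker-id>/<output-id>
            self.path, self.max, self.compress = path, max_file_size, compress
            self.seq, self.fh = 0, None

        def write(self, record: bytes):
            if self.fh is None:
                self.fh = open(os.path.join(self.path, f"{self.seq}.q"), "ab")
            self.fh.write(record + b"\n")
            if self.fh.tell() >= self.max:     # hit Max file size: rotate
                self._close()

        def _close(self):
            name = self.fh.name
            self.fh.close()
            self.fh, self.seq = None, self.seq + 1
            if self.compress:                  # Compression: Gzip, applied once closed
                with open(name, "rb") as src, gzip.open(name + ".gz", "wb") as dst:
                    shutil.copyfileobj(src, dst)
                os.remove(name)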

TLS Settings (Client Side)

Enabled: Defaults to Yes.

Validate server certs: Defaults to No – and for Event Hubs, this must always be disabled.

Authentication

Authentication parameters to use when connecting to brokers. Using TLS is highly recommended.

Enabled: Defaults to Yes. (Toggling to No hides the remaining settings in this group.)

SASL mechanism: SASL (Simple Authentication and Security Layer) authentication mechanism to use. PLAIN is the only mechanism currently supported for Event Hub Kafka brokers.

Username: The username for authentication. For Event Hub, this should always be $ConnectionString.

Password: Event Hubs primary or secondary connection string. From Microsoft's documentation, the format is:

    Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>

Example entry:

    Endpoint=sb://dummynamespace.servicebus.windows.net/;SharedAccessKeyName=dummyaccesskeyname;SharedAccessKey=5dOntTRytoC24opYThisAsit3is2B+OGY1US/fuL3ly=
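
For reference, the same values map directly onto a standalone Kafka client. This minimal sketch uses the confluent-kafka Python client; the namespace, Event Hub name, and key are placeholders.

    # Publishing to Event Hubs over its Kafka endpoint, using the same
    # values this Destination asks for. Placeholders throughout.
    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "dummynamespace.servicebus.windows.net:9093",  # Brokers
        "security.protocol": "SASL_SSL",        # TLS enabled
        "sasl.mechanisms": "PLAIN",             # SASL mechanism
        "sasl.username": "$ConnectionString",   # Username: always this literal
        "sasl.password": "Endpoint=sb://dummynamespace.servicebus.windows.net/;"
                         "SharedAccessKeyName=dummyaccesskeyname;"
                         "SharedAccessKey=<KeyValue>",  # Password: the connection string
    })

    producer.produce("my-event-hub", value=b'{"hello": "logstream"}')  # Event Hub name = topic
    producer.flush()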

Processing Settings

Post‑Processing

Pipeline: Pipeline through which to process data before it is sent out via this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – LogStream Node that processed the event.
  • cribl_wp – LogStream Worker Process that processed the event.
  • cribl_input – LogStream Source that processed the event.
  • cribl_output – LogStream Destination that processed the event.
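
For example, with cribl_pipe and cribl_host enabled, an outbound event might gain fields like these (values are invented for illustration):

    # Illustrative only: system fields merged into an outbound event.
    event = {"_time": 1610000000, "message": "login ok"}
    event.update({"cribl_pipe": "main", "cribl_host": "worker-01"})
    # -> {'_time': 1610000000, 'message': 'login ok',
    #     'cribl_pipe': 'main', 'cribl_host': 'worker-01'}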

Advanced Settings

Max record size (KB, uncompressed): Maximum size (KB) of each record batch before compression. This value should be less than the message.max.bytes setting on the Event Hubs Kafka brokers. Defaults to 768.

Max events per batch: Maximum number of events in a batch before forcing a flush. Defaults to 1000.

Flush period (sec): Maximum time between requests. Low settings could cause the payload size to be smaller than its configured maximum. Defaults to 1.
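
Max events per batch and Flush period act as an either/or trigger: a batch is sent when it fills up, or when the flush period elapses, whichever comes first. The sketch below illustrates that logic in Python; it is a simplification, not LogStream's actual scheduler.

    # Flush when the batch reaches max_events_per_batch OR when
    # flush_period_sec elapses, whichever comes first. A low flush period
    # can therefore produce payloads smaller than the configured maximum.
    import time

    def batch_and_send(source, send, max_events_per_batch=1000, flush_period_sec=1.0):
        batch, last_flush = [], time.monotonic()
        for event in source:
            batch.append(event)
            full = len(batch) >= max_events_per_batch
            stale = time.monotonic() - last_flush >= flush_period_sec
            if full or stale:
                send(batch)
                batch, last_flush = [], time.monotonic()
        if batch:
            send(batch)  # final partial batch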

Internal Fields

Cribl LogStream uses a set of internal fields to assist in forwarding data to a Destination.

Fields for this Destination:

  • __topicOut
  • __key
  • __headers
  • __keySchemaIdOut
  • __valueSchemaIdOut
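
To illustrate the __topicOut override mentioned under Event Hub name: conceptually, the Destination publishes each event to its __topicOut value when that field is present, and to the configured Event Hub name otherwise. A minimal Python sketch of that precedence (not LogStream's actual code):

    # Per-event override of the configured Event Hub name via __topicOut.
    def resolve_topic(event: dict, configured_hub: str) -> str:
        return event.get("__topicOut", configured_hub)

    resolve_topic({"msg": "hi"}, "my-event-hub")                         # 'my-event-hub'
    resolve_topic({"msg": "hi", "__topicOut": "audit"}, "my-event-hub")  # 'audit'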
