
Azure Event Hubs

Cribl Stream supports sending data to Azure Event Hubs.

Type: Streaming | TLS Support: Configurable | PQ Support: Yes

Azure Event Hubs uses a binary protocol over TCP. It does not support HTTP proxies, so Cribl Stream must send events directly to receivers. You might need to adjust your firewall rules to allow this traffic.
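If you need to confirm that the required outbound path is open, you can test reachability of the Kafka endpoint directly from a Worker Node. Below is a minimal sketch in Python, assuming a placeholder namespace of yourdomain.servicebus.windows.net (matching the Brokers example later on this page) and the Kafka endpoint port 9093:

    # Minimal TLS connectivity check to the Event Hubs Kafka endpoint.
    # Replace the placeholder namespace with your own.
    import socket
    import ssl

    host, port = "yourdomain.servicebus.windows.net", 9093
    context = ssl.create_default_context()

    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"Connected to {host}:{port} using {tls.version()}")

If the connection times out or is reset, check your outbound firewall rules for port 9093.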

Configuring Cribl Stream to Output to Azure Event Hubs

From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:

To configure via the graphical QuickConnect UI, click Routing > QuickConnect (Stream) or Collect (Edge). Next, click + Add Destination at right. From the resulting drawer’s tiles, select Azure > Event Hubs. Next, click either + Add Destination or (if displayed) Select Existing. The resulting drawer will provide the options below.

Or, to configure via the Routing UI, click Data > Destinations (Stream) or More > Destinations (Edge). From the resulting page’s tiles or the Destinations left nav, select Azure > Event Hubs. Next, click New Destination to open a New Destination modal that provides the options below.

General Settings

Output ID: Enter a unique name to identify this Azure Event Hubs definition.

Brokers: List of Event Hub Kafka brokers to connect to. (For example, yourdomain.servicebus.windows.net:9093.) Find the hostname in Shared Access Policies, in the host portion of the primary or secondary connection string.

Event Hub name: The name of the Event Hub (equivalent to a Kafka topic) on which to publish events. Can be overwritten using the __topicOut field.
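If it’s unclear how the Brokers value relates to your connection string, the following minimal sketch (in Python, using the dummy connection string shown under Authentication below) extracts the namespace FQDN from the Endpoint component and appends the Kafka port 9093:

    # Derive the Brokers value from an Event Hubs connection string (dummy values).
    from urllib.parse import urlparse

    conn_str = ("Endpoint=sb://dummynamespace.servicebus.windows.net/;"
                "SharedAccessKeyName=dummyaccesskeyname;"
                "SharedAccessKey=5dOntTRytoC24opYThisAsit3is2B+OGY1US/fuL3ly=")

    # The Endpoint component carries the namespace FQDN; the Kafka endpoint listens on 9093.
    parts = dict(p.split("=", 1) for p in conn_str.split(";") if "=" in p)
    broker = f"{urlparse(parts['Endpoint']).hostname}:9093"
    print(broker)  # dummynamespace.servicebus.windows.net:9093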

Optional Settings

Acknowledgments: Control the number of required acknowledgments. Defaults to Leader.

Record data format: Format to use to serialize events before writing to the Event Hub Kafka brokers. Defaults to JSON.

Backpressure behavior: Whether to block, drop, or queue events when all receivers are exerting backpressure. Defaults to Block.

Tags: Optionally, add tags that you can use for filtering and grouping at the final destination. Use a tab or hard return between (arbitrary) tag names.

Persistent Queue Settings

This tab is displayed when the Backpressure behavior is set to Persistent Queue.

On Cribl-managed Cribl.Cloud Workers (with an Enterprise plan), this tab exposes only the Clear Persistent Queue button. A maximum queue size of 1 GB disk space is automatically allocated per Worker Process.

Max file size: The maximum data volume to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, Cribl Stream stops queueing and applies the fallback Queue-full behavior. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. Defaults to $CRIBL_HOME/state/queues. To this value, Cribl Stream will append /<worker-id>/<output-id>.
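For example, with the default path, a Worker Process on a hypothetical Worker named worker-0 writing to an Output ID of azure_eventhubs would queue files under $CRIBL_HOME/state/queues/worker-0/azure_eventhubs/.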

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.

Clear persistent queue: Click this button to delete the files that are currently queued for delivery to this Destination. A confirmation modal will appear. (This button appears only after you define the Output ID.)

TLS Settings (Client Side)

Enabled: Defaults to Yes.

Validate server certs: Defaults to Yes.

Authentication

Authentication parameters to use when connecting to brokers. Using TLS is highly recommended.

Enabled: Defaults to Yes. (Toggling to No hides the remaining settings in this group.)

SASL mechanism: SASL (Simple Authentication and Security Layer) authentication mechanism to use. PLAIN is currently the only mechanism supported for Event Hubs Kafka brokers.

Username: The username for authentication. For Event Hubs, this should always be $ConnectionString.

Use the Authentication method buttons to select one of these options:

  • Manual: Displays Username and Password fields for you to enter the SASL credentials directly. The password is your Event Hubs primary or secondary connection string. From Microsoft’s documentation, the format is:
    Endpoint=sb://<FQDN>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>

    Example entry: Endpoint=sb://dummynamespace.servicebus.windows.net/;SharedAccessKeyName=dummyaccesskeyname;SharedAccessKey=5dOntTRytoC24opYThisAsit3is2B+OGY1US/fuL3ly=

  • Secret: This option exposes a Password (text secret) drop-down, in which you can select a stored secret that references the credentials described above. A Create link is available to store a new, reusable secret.
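To make these settings concrete, here is a minimal sketch of the equivalent client-side configuration, written in Python with the third-party kafka-python package (not something Cribl Stream uses internally). The broker, connection string, and key names are the dummy values from the example above; my-event-hub is a hypothetical Event Hub name:

    # Equivalent SASL PLAIN over TLS configuration, for illustration only.
    from kafka import KafkaProducer

    conn_str = ("Endpoint=sb://dummynamespace.servicebus.windows.net/;"
                "SharedAccessKeyName=dummyaccesskeyname;"
                "SharedAccessKey=5dOntTRytoC24opYThisAsit3is2B+OGY1US/fuL3ly=")

    producer = KafkaProducer(
        bootstrap_servers="dummynamespace.servicebus.windows.net:9093",  # Brokers
        security_protocol="SASL_SSL",             # TLS enabled
        sasl_mechanism="PLAIN",                   # SASL mechanism
        sasl_plain_username="$ConnectionString",  # Username (literal string)
        sasl_plain_password=conn_str,             # Password = connection string
    )
    producer.send("my-event-hub", b'{"hello": "event hubs"}')  # topic = Event Hub name
    producer.flush()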

Processing Settings

Post-Processing

Pipeline: Pipeline to process data before sending it out through this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the Cribl Stream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – Cribl Stream Node that processed the event.
  • cribl_wp – Cribl Stream Worker Process that processed the event.
  • cribl_input – Cribl Stream Source that processed the event.
  • cribl_output – Cribl Stream Destination that processed the event.

Advanced Settings

Max record size (KB, uncompressed): Maximum size (KB) of each record batch before compression. This value should be less than the message.max.bytes setting on the Kafka brokers. Defaults to 768.

Max events per batch: Maximum number of events in a batch before forcing a flush. Defaults to 1000.

Flush period (sec): Maximum time between requests. Low settings could cause the payload size to be smaller than its configured maximum. Defaults to 1.
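Taken together, these three settings bound when a batch is sent: a flush is forced by event count or by the flush period, and the record size caps how large any single batch can grow. A simplified, illustrative sketch of that logic, using the default values (the function and its arguments are hypothetical, not part of Cribl Stream):

    # Simplified view of the batching limits (defaults shown); illustration only.
    def should_flush(batch_kb: float, batch_events: int, secs_since_last_send: float) -> bool:
        return (batch_kb >= 768                   # Max record size (KB, uncompressed)
                or batch_events >= 1000           # Max events per batch
                or secs_since_last_send >= 1.0)   # Flush period (sec)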

Connection timeout (ms): Maximum time to wait for a successful connection. Defaults to 10000 ms (10 seconds). Valid range is 1000 to 3600000 ms (1 second to 1 hour).

Request timeout (ms): Maximum time to wait for a successful request. Defaults to 60000 ms (1 minute).

Max retries: Maximum number of times to retry a failed request before the message fails. Defaults to 5; enter 0 to not retry at all.

Authentication timeout (ms): Maximum time to wait for Kafka to respond to an authentication request. Defaults to 1000 (1 second). This setting is available starting in Cribl Stream 4.0.4.

Reauthentication threshold (ms): If the broker requires periodic reauthentication, this setting defines how long before the reauthentication timeout Cribl Stream initiates the reauthentication. Defaults to 10000 (10 seconds). This setting is available starting in Cribl Stream 4.0.4.

A small value for this setting, combined with high network latency, might prevent the Destination from reauthenticating before the Kafka broker closes the connection.

A large value might cause the Destination to send reauthentication messages too soon, wasting bandwidth.

The Kafka setting connections.max.reauth.ms controls the reauthentication threshold on the Kafka side.
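For example, if the broker’s connections.max.reauth.ms is 300000 ms (5 minutes, a hypothetical value) and the threshold is left at its default of 10000 ms, Cribl Stream starts reauthenticating roughly 10 seconds before each 5-minute session expires.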

Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Internal Fields

Cribl Stream uses a set of internal fields to assist in forwarding data to a Destination.

Fields for this Destination:

  • __topicOut
  • __key
  • __headers
  • __keySchemaIdOut
  • __valueSchemaIdOut