Confluent Cloud

Cribl Stream supports sending data to Kafka topics on the Confluent Cloud managed Kafka platform.

Type: Streaming | TLS Support: Configurable | PQ Support: Yes

Confluent Cloud uses a binary protocol over TCP. It does not support HTTP proxies, so Cribl Stream must send events directly to receivers. You might need to adjust your firewall rules to allow this traffic.

Sending Kafka Topic Data to Confluent Cloud

From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:

To configure via the graphical QuickConnect UI, click Routing > QuickConnect (Stream) or Collect (Edge). Next, click Add Destination at right. From the resulting drawer’s tiles, select Confluent Cloud. Next, click either Add Destination or (if displayed) Select Existing. The resulting drawer will provide the options below.

Or, to configure via the Routing UI, click Data > Destinations (Stream) or More > Destinations (Edge). From the resulting page’s tiles or the Destinations left nav, select Confluent Cloud. Next, click Add Destination to open a New Destination modal that provides the options below.

General Settings

Output ID: Enter a unique name to identify this Destination definition.

Brokers: List of Confluent Cloud brokers to connect to. (E.g., myAccount.confluent.cloud:9092.)

Topic: The topic on which to publish events. Can be overwritten using the event’s __topicOut field.
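Cribl Stream manages the broker connection for you; the sketch below is purely for illustration, showing how an equivalent broker entry would look in the open-source kafkajs client (which may differ from the library Cribl Stream uses internally). The endpoint is the placeholder from the example above.

```typescript
// Illustration only: a Confluent Cloud broker entry in host:port form,
// as used by the kafkajs client. Cribl Stream manages this connection itself.
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "example-client",
  brokers: ["myAccount.confluent.cloud:9092"], // same format as the Brokers field
  ssl: true, // Confluent Cloud endpoints require TLS
});
```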

Optional Settings

Acknowledgments: Select the number of required acknowledgments. Defaults to Leader.

Record data format: Format to use to serialize events before writing to Kafka. Defaults to JSON. When set to Protobuf, the Protobuf Format Settings section (left tab) becomes available.

Compression: Codec to use to compress the data before sending to Kafka. Select None, Gzip, Snappy, or LZ4. Defaults to Gzip.
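For readers who think in raw Kafka client terms, here is a hedged sketch (again using kafkajs, purely as an illustration; Cribl Stream’s internal implementation may differ) of how Topic, Acknowledgments, Record data format, and Compression roughly map onto a producer call. The topic name and events are placeholders.

```typescript
// Illustration only: rough mapping of this Destination's settings onto a
// kafkajs producer call. Values are placeholders.
import { Kafka, CompressionTypes } from "kafkajs";

const kafka = new Kafka({ brokers: ["myAccount.confluent.cloud:9092"], ssl: true });
const producer = kafka.producer();

async function sendBatch(events: object[]): Promise<void> {
  await producer.connect();
  await producer.send({
    topic: "my-topic",                  // General Settings > Topic
    acks: 1,                            // Acknowledgments: 1 = Leader (default), -1 = All, 0 = None
    compression: CompressionTypes.GZIP, // Compression: Gzip (default)
    messages: events.map((e) => ({ value: JSON.stringify(e) })), // Record data format: JSON (default)
  });
}
```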

Backpressure behavior: Select whether to block, drop, or queue incoming events when all receivers are exerting backpressure. Defaults to Block.

Tags: Optionally, add tags that you can use to filter and group Destinations in Cribl Stream’s Manage Destinations page. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.

Persistent Queue Settings

This tab is displayed when the Backpressure behavior is set to Persistent Queue.

On Cribl-managed Cribl.Cloud Workers (with an Enterprise plan), this tab exposes only the destructive Clear Persistent Queue button (described below in this section). A maximum queue size of 1 GB disk space is automatically allocated per PQ‑enabled Destination, per Worker Process. The 1 GB limit is on outbound uncompressed data, and no compression is applied to the queue.

This limit is not configurable. If the queue fills up, Cribl Stream will block outbound data. To configure the queue size, compression, queue-full fallback behavior, and other options below, use a hybrid Group.

Max file size: The maximum data volume to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped and data blocking is applied. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to: $CRIBL_HOME/state/queues.

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip, Snappy, and LZ4 are also available.

Cribl strongly recommends enabling compression. Doing so improves Cribl Stream’s performance, enabling faster data transfer using less bandwidth.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.

Clear Persistent Queue: Click this “panic” button if you want to delete the files that are currently queued for delivery to this Destination. A confirmation modal will appear, because this will free up disk space by permanently deleting the queued data without delivering it to downstream receivers. (Appears only after Output ID has been defined.)

Strict ordering: The default Yes position enables FIFO (first in, first out) event forwarding. When receivers recover, Cribl Stream will send earlier queued events before forwarding newly arrived events. To instead prioritize new events before draining the queue, toggle this off. Doing so will expose this additional control:

  • Drain rate limit (EPS): Optionally, set a throttling rate (in events per second) on writing from the queue to receivers. (The default 0 value disables throttling.) Throttling the queue’s drain rate can boost the throughput of new/active connections, by reserving more resources for them. You can further optimize Workers’ startup connections and CPU load at Group Settings > Worker Processes.

Calculating the Time PQ Will Take to Engage

PQ will not engage until Cribl Stream has exhausted all attempts to send events to the Kafka receiver. This can take several minutes if requests continue to fail or time out.

To calculate the longest possible time this can take, multiply the values of Advanced Settings > Request timeout and Max retries. For the default values (60 seconds and 5, respectively), this would be 60 seconds times 5 retries = 300 seconds, or 5 minutes.
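Expressed as a quick calculation, with the default values assumed:

```typescript
// Worst-case delay before PQ engages, using the default Advanced Settings.
const requestTimeoutMs = 60_000; // Request timeout (ms)
const maxRetries = 5;            // Max retries

const worstCaseSeconds = (requestTimeoutMs * maxRetries) / 1000;
console.log(worstCaseSeconds); // 300 seconds, or 5 minutes
```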

TLS Settings (Client Side)

Enabled: Defaults to Yes. When toggled to Yes, displays the following settings:

Autofill?: This setting is experimental.

Validate server certs: Toggle to Yes to reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (e.g., the system’s CA).

Server name (SNI): Leave this field blank. See Connecting to Kafka below.

Minimum TLS version: Optionally, select the minimum TLS version to use when connecting.

Maximum TLS version: Optionally, select the maximum TLS version to use when connecting.

Certificate name: The name of the predefined certificate.

CA certificate path: Path on client containing CA certificates (in PEM format) to use to verify the server’s cert. Path can reference $ENV_VARS.

Private key path (mutual auth): Path on client containing the private key (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

Certificate path (mutual auth): Path on client containing certificates (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

Passphrase: Passphrase to use to decrypt private key.
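The fields above correspond closely to standard TLS client options. As a rough illustration (not Cribl Stream’s internal implementation), here is how they might be expressed with the kafkajs client, which accepts Node.js TLS options; all file paths and the passphrase are placeholders.

```typescript
// Illustration only: client-side TLS options, expressed as kafkajs's `ssl`
// setting (standard Node.js TLS options). Paths and passphrase are placeholders.
import { readFileSync } from "node:fs";
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  brokers: ["myAccount.confluent.cloud:9092"],
  ssl: {
    rejectUnauthorized: true,                               // Validate server certs
    minVersion: "TLSv1.2",                                  // Minimum TLS version
    ca: [readFileSync("/path/to/ca.pem", "utf8")],          // CA certificate path
    cert: readFileSync("/path/to/client-cert.pem", "utf8"), // Certificate path (mutual auth)
    key: readFileSync("/path/to/client-key.pem", "utf8"),   // Private key path (mutual auth)
    passphrase: "example-passphrase",                       // Passphrase
    // servername (SNI) is deliberately left unset; see Connecting to Kafka below.
  },
});
```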

Authentication

This section documents the SASL (Simple Authentication and Security Layer) authentication settings to use when connecting to brokers. Using TLS is highly recommended.

Enabled: Defaults to No. When toggled to Yes:

  • SASL mechanism: Select the SASL (Simple Authentication and Security Layer) authentication mechanism to use. Defaults to PLAIN. SCRAM‑SHA‑256, SCRAM‑SHA‑512, and GSSAPI/Kerberos are also available. The mechanism you select determines the controls displayed below.

PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512

With any of these SASL mechanisms, select one of the following buttons:

Manual: Displays Username and Password fields to enter your Kafka credentials directly.

Secret: This option exposes a Credentials secret drop-down in which you can select a stored text secret that references your Kafka credentials. A Create link is available to store a new, reusable secret.
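As a rough illustration of what these settings amount to on the wire (shown with the kafkajs client; Cribl Stream’s internals may differ), a PLAIN or SCRAM configuration might look like the following. The credentials are placeholders; for Confluent Cloud they are typically an API key and secret.

```typescript
// Illustration only: SASL/PLAIN (or SCRAM) credentials as kafkajs options.
// Credentials are placeholders.
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  brokers: ["myAccount.confluent.cloud:9092"],
  ssl: true, // send SASL credentials only over TLS
  sasl: {
    mechanism: "plain", // or "scram-sha-256" / "scram-sha-512"
    username: "EXAMPLE_API_KEY",
    password: "EXAMPLE_API_SECRET",
  },
});
```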

GSSAPI/Kerberos

Selecting Kerberos as the authentication mechanism displays the following options:

Keytab location: Enter the location of the keytab file for the authentication principal.

Principal: Enter the authentication principal, e.g.: kafka_user@example.com.

Broker service class: Enter the Kerberos service class for Kafka brokers, e.g.: kafka.

You will also need to set up your environment and configure the Cribl Stream host for use with Kerberos. See Kafka Authentication with Kerberos for further detail.

Schema Registry

This section governs Kafka Schema Registry Authentication for Avro-encoded data with a schema stored in the Confluent Schema Registry.

Enabled: Defaults to No. When toggled to Yes, displays the following controls:

Schema registry URL: URL for access to the Confluent Schema Registry. (E.g., http://localhost:8081.)

Default key schema ID: Used to transform key values when __keySchemaIdOut is not present. Leave blank if key transformation is not required by default.

Default value schema ID: Used to transform _raw when __valueSchemaIdOut is not present. Leave blank if value transformation is not required by default.

TLS enabled: Defaults to No. When toggled to Yes, displays the following TLS settings for the Schema Registry (in the same format as the TLS Settings (Client Side) above):

  • Validate server certs: Require client to reject any connection that is not authorized by a CA in the CA certificate path, or by another trusted CA (e.g., the system’s CA). Defaults to No.

  • Server name (SNI): Server name for the SNI (Server Name Indication) TLS extension. This must be a host name, not an IP address.

    With a dedicated Confluent Cloud cluster hosted in Microsoft Azure, be sure to specify the Server name (SNI). If this is omitted, Confluent Cloud might reset the connection to Cribl Stream.

  • Minimum TLS version: Optionally, select the minimum TLS version to use when connecting.

  • Maximum TLS version: Optionally, select the maximum TLS version to use when connecting.

  • Certificate name: The name of the predefined certificate.

  • CA certificate path: Path on client containing CA certificates (in PEM format) to use to verify the server’s cert. Path can reference $ENV_VARS.

  • Private key path (mutual auth): Path on client containing the private key (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

  • Certificate path (mutual auth): Path on client containing certificates (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

  • Passphrase: Passphrase to use to decrypt private key.
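For context on what the default schema IDs above are used for, here is a hedged sketch of Avro encoding against a Confluent Schema Registry, using the open-source @kafkajs/confluent-schema-registry package as an illustration (not Cribl Stream’s implementation). The URL and schema ID are placeholders matching the defaults described above.

```typescript
// Illustration only: encoding an event value against a Confluent Schema
// Registry. The registry URL and schema ID are placeholders.
import { SchemaRegistry } from "@kafkajs/confluent-schema-registry";

const registry = new SchemaRegistry({ host: "http://localhost:8081" }); // Schema registry URL

async function encodeValue(event: object): Promise<Buffer> {
  const defaultValueSchemaId = 1; // Default value schema ID (placeholder)
  // Serializes the event with the registered Avro schema and prefixes the schema ID.
  return registry.encode(defaultValueSchemaId, event);
}
```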

Processing Settings

Post‑Processing

In this section’s Pipeline drop-down list, you can select a single existing Pipeline to process data before it is sent through this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the Cribl Stream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – Cribl Stream Node that processed the event.
  • cribl_input – Cribl Stream Source that processed the event.
  • cribl_output – Cribl Stream Destination that processed the event.
  • cribl_route – Cribl Stream Route (or QuickConnect) that processed the event.
  • cribl_wp – Cribl Stream Worker Process that processed the event.

Advanced Settings

Max record size (KB, uncompressed): Maximum size (KB) of each record batch before compression. This setting should be less than the message.max.bytes setting on your Kafka brokers. Defaults to 768.

Max events per batch: Maximum number of events in a batch before forcing a flush. Defaults to 1000.

Flush period (sec): Maximum time between requests. Low values could cause the payload size to be smaller than its configured maximum. Defaults to 1.

Connection timeout (ms): Maximum time to wait for a successful connection. Defaults to 10000 ms, i.e., 10 seconds. Valid range is 1000 to 3600000 ms, i.e., 1 second to 1 hour.

Request timeout (ms): Maximum time to wait for a successful request. Defaults to 60000 ms, i.e., 1 minute.

Max retries: Maximum number of times to retry a failed request before the message fails. Defaults to 5; enter 0 to not retry at all.

Authentication timeout (ms): Maximum time to wait for Kafka to respond to an authentication request. Defaults to 1000 (1 second).

Reauthentication threshold (ms): If the broker requires periodic reauthentication, this setting defines how long before the reauthentication timeout Cribl Stream initiates the reauthentication. Defaults to 10000 (10 seconds).

A small value for this setting, combined with high network latency, might prevent the Destination from reauthenticating before the Kafka broker closes the connection.

A large value might cause the Destination to send reauthentication messages too soon, wasting bandwidth.

The Kafka setting connections.max.reauth.ms controls the reauthentication threshold on the Kafka side.
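Several of these Advanced Settings have direct counterparts in common Kafka client configuration. As an illustration (kafkajs shown; Cribl Stream’s internal library may differ), with this Destination’s default values:

```typescript
// Illustration only: the timeout and retry settings above, expressed as
// kafkajs client options with this Destination's default values.
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  brokers: ["myAccount.confluent.cloud:9092"],
  connectionTimeout: 10_000,         // Connection timeout (ms)
  requestTimeout: 60_000,            // Request timeout (ms)
  authenticationTimeout: 1_000,      // Authentication timeout (ms)
  reauthenticationThreshold: 10_000, // Reauthentication threshold (ms)
  retry: { retries: 5 },             // Max retries
});
```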

Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Protobuf Format Settings

Definition set: From the drop-down, select OpenTelemetry. This makes the Object type setting available.

Object type: From the drop-down, select Logs, Metrics, or Traces.

Working with Protobufs

This Destination supports Binary Protobuf payload encoding. The Protobufs it sends can encode traces, metrics, or logs, the three types of telemetry data defined in the OpenTelemetry Project’s Data Sources documentation:

  • A trace tracks the progression of a single request.
    • Each trace is a tree of spans.
    • A span object represents the work being done by the individual services, or components, involved in a request as that request flows through a system.
  • A metric provides aggregated statistical information.
    • A metric contains individual measurements called data points.
  • A log, in OpenTelemetry terms, is “any data that is not part of a distributed trace or a metric”.

When configuring Pipelines (including pre-processing and post-processing Pipelines), you need to ensure that events sent to this Destination conform to the relevant Protobuf specification.

This Destination will drop non-conforming events. If the Destination encounters a parsing error when trying to convert an event to OTLP, it discards the event and Cribl Stream logs the error.
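For orientation, the sketch below shows the rough shape of an OTLP logs payload (Object type: Logs), written as a plain object with placeholder values. Events that can’t be mapped onto a structure like this are the ones the Destination drops.

```typescript
// Illustration only: minimal OTLP-logs-shaped payload with placeholder values.
const otlpLogsPayload = {
  resourceLogs: [
    {
      resource: {
        attributes: [{ key: "service.name", value: { stringValue: "example-service" } }],
      },
      scopeLogs: [
        {
          scope: { name: "example.scope" },
          logRecords: [
            {
              timeUnixNano: "1700000000000000000",
              severityText: "INFO",
              body: { stringValue: "an example log line" },
            },
          ],
        },
      ],
    },
  ],
};
```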

Connecting to Kafka

Leave the TLS Settings > Server name (SNI) field blank

In Cribl Stream’s Kafka-based Sources and Destinations (including this one), the Kafka library that Cribl Stream uses manages SNI (Server Name Indication) without any input from Cribl Stream. Therefore, you should leave the TLS Settings > Server name (SNI) field blank.

Setting this field in the Cribl Stream UI can cause traffic to be routed to the wrong brokers, because it interferes with the Kafka library’s operation. This is especially important for Confluent Cloud Dedicated clusters, which rely on SNI – as managed by the Kafka library – for routing.

Connecting to a Kafka cluster entails working with hostnames for brokers and bootstrap servers.

Brokers are servers that comprise the storage layer in a Kafka cluster. Bootstrap servers handle the initial connection to the Kafka cluster, and then return the list of brokers. A broker list can run into the hundreds. Every Kafka cluster has a bootstrap.servers property, defined as either a single hostname:port entry or a comma-separated list of them. If Cribl Stream tries to connect via one bootstrap server and that fails, Cribl Stream then tries another one on the list.

In the General Settings > Brokers list, you can enter either the hostnames of brokers that your Kafka server has been configured to use, or the hostnames of one or more bootstrap servers. If Kafka returns a list of brokers that’s longer than the list you entered, Cribl Stream uses the full list internally, but neither persists it nor displays it in the UI. The connection process simply starts from the beginning whenever the Source or Destination is started.
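For example, entering a single Confluent Cloud bootstrap endpoint in Brokers is enough; the cluster returns the full broker list during the initial metadata exchange. The sketch below shows that exchange with the kafkajs admin API, purely as an illustration (the endpoint is a placeholder).

```typescript
// Illustration only: a single bootstrap endpoint yields the cluster's full
// broker list via the initial metadata exchange.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ brokers: ["myAccount.confluent.cloud:9092"], ssl: true });

async function listBrokers(): Promise<string[]> {
  const admin = kafka.admin();
  await admin.connect();
  const { brokers } = await admin.describeCluster(); // may be far longer than what you entered
  await admin.disconnect();
  return brokers.map((b) => `${b.host}:${b.port}`);
}
```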

Here’s an overview of the connection process:

  1. From the General Settings > Brokers list – where each broker is listed as a hostname and port – Cribl Stream takes a hostname and resolves it to an IP address.

  2. Cribl Stream makes a connection to that IP address. Although Cribl Stream resolved one particular hostname to that IP address, there may be many services running at that address, each with its own distinct hostname.

  3. Cribl Stream establishes TLS security for the connection.

Although SNI is managed by the Kafka library rather than in the Cribl Stream UI, you might want to know how it fits into the connection process. The purpose of the SNI is to specify one hostname – i.e., service – among many that might be running on a given IP address within a Kafka cluster. Excluding the other services is one way that TLS makes the connection more secure.

Internal Fields

Cribl Stream uses a set of internal fields to assist in forwarding data to a Destination.

Fields for this Destination:

  • __topicOut
  • __key
  • __headers
  • __keySchemaIdOut
  • __valueSchemaIdOut
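As a hedged sketch of what these fields do (not Cribl Stream’s actual implementation), __topicOut overrides the configured topic on a per-event basis, as noted under General Settings:

```typescript
// Illustration only: per-event internal fields take precedence over the
// Destination's static settings. Field names match the list above.
interface CriblEvent {
  _raw?: string;
  __topicOut?: string; // overrides General Settings > Topic for this event
  __key?: string;      // Kafka record key
  [field: string]: unknown;
}

function resolveTopic(event: CriblEvent, configuredTopic: string): string {
  return event.__topicOut ?? configuredTopic;
}
```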