Version: 3.2

Kafka

Cribl LogStream supports sending data to a Kafka topic. Kafka is a streaming Destination type.

Configuring Cribl LogStream to Output to Kafka

In the QuickConnect UI: Click + Add beside Destinations. From the resulting drawer's tiles, select Kafka. Next, click either + Add New or (if displayed) Select Existing. The resulting drawer will provide the following options and fields.

Or, in the Data Routes UI: From the top nav of a LogStream instance or Group, select Data > Destinations. From the resulting page's tiles or the Destinations left nav, select Kafka. Next, click + Add New to open a New Destination modal that provides the following options and fields.

General Settings

Output ID: Enter a unique name to identify this Kafka definition.

Brokers: List of Kafka brokers to connect to. (E.g., localhost:9092.)

Topic: The topic on which to publish events. Can be overwritten using the event's __topic field.

Acknowledgments: Select the number of required acknowledgments. Defaults to Leader.

Record data format: Format to use to serialize events before writing to Kafka. Defaults to JSON.

Compression: Codec to compress the data before sending to Kafka. Select None, Gzip, or Snappy.

Backpressure behavior: Select whether to block, drop, or queue incoming events when all receivers are exerting backpressure. Defaults to Block.
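
To see how these fields relate to standard Kafka producer options, here is a minimal sketch using the open-source kafkajs client. This is an illustration of the underlying producer semantics, not LogStream's internal code; the client ID, topic name, and broker address are placeholder values.

```typescript
// Illustrative only: maps the General Settings above onto standard Kafka
// producer options, via the open-source kafkajs client.
import { Kafka, CompressionTypes } from "kafkajs";

const kafka = new Kafka({
  clientId: "kafka-out",            // placeholder, analogous to Output ID
  brokers: ["localhost:9092"],      // Brokers
});

const producer = kafka.producer();

async function sendEvents(events: Record<string, unknown>[]): Promise<void> {
  await producer.connect();
  await producer.send({
    topic: "cribl-events",                // Topic (placeholder name)
    acks: 1,                              // Acknowledgments: 1 = Leader, -1 = All, 0 = None
    compression: CompressionTypes.GZIP,   // Compression: Gzip
    messages: events.map((e) => ({
      value: JSON.stringify(e),           // Record data format: JSON
    })),
  });
}
```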

Persistent Queue Settings

This section is displayed when the Backpressure behavior is set to Persistent Queue.

Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped and data blocking is applied. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to: $CRIBL_HOME/state/queues.

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.
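
The interaction of these limits can be sketched in code. The following is a conceptual sketch under assumed semantics (close and roll the queue file at Max file size; apply the Queue-full behavior once Max queue size is reached), not LogStream's implementation:

```typescript
// Conceptual sketch of the persistent-queue limits described above.
// Events append to the current queue file until Max file size, then the
// file is closed (and optionally compressed) and a new one is opened;
// once total disk usage hits Max queue size, the Queue-full behavior
// (block or drop) applies.
const MAX_FILE_BYTES = 1 * 1024 * 1024;          // Max file size: 1 MB
const MAX_QUEUE_BYTES = 5 * 1024 * 1024 * 1024;  // Max queue size: e.g., 5 GB (placeholder)

let currentFileBytes = 0;
let totalQueueBytes = 0;

function enqueue(record: Buffer): "queued" | "full" {
  if (totalQueueBytes + record.length > MAX_QUEUE_BYTES) {
    return "full"; // Queue-full behavior decides: block (backpressure) or drop
  }
  if (currentFileBytes + record.length > MAX_FILE_BYTES) {
    currentFileBytes = 0; // close the current queue file, open a new one
  }
  currentFileBytes += record.length;
  totalQueueBytes += record.length;
  return "queued";
}
```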

TLS Settings (Client Side)

Enabled: Defaults to No. When toggled to Yes:

Validate server certs: Toggle to Yes to reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (e.g., the system's CA).

Server name (SNI): Server name for the SNI (Server Name Indication) TLS extension. This must be a host name, not an IP address.

Certificate name: The name of the predefined certificate.

CA certificate path: Path on client containing CA certificates (in PEM format) to use to verify the server's cert. Path can reference $ENV_VARS.

Private key path (mutual auth): Path on client containing the private key (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

Certificate path (mutual auth): Path on client containing certificates (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

Passphrase: Passphrase to use to decrypt private key.

Minimum TLS version: Optionally, select the minimum TLS version to use when connecting.

Maximum TLS version: Optionally, select the maximum TLS version to use when connecting.
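
These fields correspond closely to Node.js TLS connection options. As an illustration (paths, host names, and the environment variable are placeholders, and this is not LogStream's internal code), a standalone kafkajs client would express the same settings like this:

```typescript
import * as fs from "fs";
import { Kafka } from "kafkajs";

// Illustrative only: maps the TLS fields above onto Node.js TLS options,
// which kafkajs passes through to tls.connect().
const kafka = new Kafka({
  brokers: ["kafka.example.com:9093"],
  ssl: {
    rejectUnauthorized: true,                                  // Validate server certs
    servername: "kafka.example.com",                           // Server name (SNI)
    ca: [fs.readFileSync("/path/to/ca.pem", "utf8")],          // CA certificate path
    key: fs.readFileSync("/path/to/client-key.pem", "utf8"),   // Private key path (mutual auth)
    cert: fs.readFileSync("/path/to/client-cert.pem", "utf8"), // Certificate path (mutual auth)
    passphrase: process.env.KEY_PASSPHRASE,                    // Passphrase (placeholder env var)
    minVersion: "TLSv1.2",                                     // Minimum TLS version
    maxVersion: "TLSv1.3",                                     // Maximum TLS version
  },
});
```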

Authentication

This section governs SASL (Simple Authentication and Security Layer) authentication to use when connecting to brokers. Using TLS is highly recommended.

Enabled: Defaults to No. When toggled to Yes:

SASL mechanism: Use this drop-down to select the SASL authentication mechanism to use. The mechanism you select determines the controls displayed below.

PLAIN, SCRAM-256, or SCRAM-512

With any of these authentication mechanisms, select one of the following buttons:

Manual: Displays Username and Password fields to enter your Kafka credentials directly.

Secret: This option exposes a Credentials secret drop-down in which you can select a stored text secret that references your Kafka credentials. A Create link is available to store a new, reusable secret.
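
For reference, the equivalent SASL settings in a standalone kafkajs client look like the following sketch. The credentials are placeholders; in LogStream itself you supply them through the fields above.

```typescript
import { Kafka } from "kafkajs";

// Illustrative only: SASL/SCRAM authentication over TLS.
const kafka = new Kafka({
  brokers: ["kafka.example.com:9093"],
  ssl: true,                      // SASL should be combined with TLS
  sasl: {
    mechanism: "scram-sha-256",   // or "plain" / "scram-sha-512"
    username: "kafka_user",       // Manual: Username (placeholder)
    password: "kafka_password",   // Manual: Password (placeholder)
  },
});
```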

GSSAPI/Kerberos

Selecting Kerberos as the authentication mechanism displays the following options:

Keytab location: Enter the location of the key table file for the authentication principal.

Principal: Enter the authentication principal, e.g.: kafka_user@example.com.

Broker service class: Enter the Kerberos service class for Kafka brokers, e.g.: kafka.

Schema Registry

This section governs Kafka Schema Registry authentication for AVRO-encoded data with a schema stored in the Confluent Schema Registry. (An encoding sketch follows these settings.)

Enabled: Defaults to No. When toggled to Yes:

  • Schema registry URL: URL for access to the Confluent Schema Registry. (E.g., http://<hostname>:8081.)

  • Default key schema ID: Used to transform key values when __keySchemaIdOut is not present. Leave blank if key transformation is not required by default.

  • Default value schema ID: Used to transform _raw when __valueSchemaIdOut is not present. Leave blank if value transformation is not required by default.

  • TLS enabled: Defaults to No. When toggled to Yes, displays the following TLS settings for the Schema Registry:

    TLS Settings (Schema Registry)

    These have the same format as the TLS Settings (Client Side) above.

  • Validate server certs: Require client to reject any connection that is not authorized by a CA in the CA certificate path, or by another trusted CA (e.g., the system's CA). Defaults to No.

  • Server name (SNI): Server name for the SNI (Server Name Indication) TLS extension. This must be a host name, not an IP address.

  • Certificate name: The name of the predefined certificate.

  • CA certificate path: Path on client containing CA certificates (in PEM format) to use to verify the server's cert. Path can reference $ENV_VARS.

  • Private key path (mutual auth): Path on client containing the private key (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

  • Certificate path (mutual auth): Path on client containing certificates (in PEM format) to use. Path can reference $ENV_VARS. Use only if mutual auth is required.

  • Passphrase: Passphrase to use to decrypt private key.

  • Minimum TLS version: Optionally, select the minimum TLS version to use when connecting.

  • Maximum TLS version: Optionally, select the maximum TLS version to use when connecting.
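
To illustrate what these settings do on the wire, here is a sketch using the open-source @kafkajs/confluent-schema-registry client. The registry host and schema ID are placeholders, and this is not LogStream's internal code:

```typescript
import { SchemaRegistry } from "@kafkajs/confluent-schema-registry";

// Illustrative only: AVRO-encodes an event against a Confluent Schema
// Registry. The returned Buffer uses the Confluent wire format
// (magic byte + schema ID + AVRO payload).
const registry = new SchemaRegistry({
  host: "http://schema-registry.example.com:8081",  // Schema registry URL (placeholder)
});

async function encodeValue(event: object): Promise<Buffer> {
  const defaultValueSchemaId = 1;  // placeholder for Default value schema ID
  return registry.encode(defaultValueSchemaId, event);
}
```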

Processing Settings

Post‑Processing

Pipeline: Pipeline to process data before sending the data out using this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – LogStream Node that processed the event.
  • cribl_wp – LogStream Worker Process that processed the event.
  • cribl_input – LogStream Source that processed the event.
  • cribl_output – LogStream Destination that processed the event.

Advanced Settings

Max record size (KB, uncompressed): Maximum size (KB) of each record batch before compression. This setting should be less than the message.max.bytes setting on your Kafka brokers. Defaults to 768.

Max events per batch: Maximum number of events in a batch before forcing a flush. Defaults to 1000.

Flush period (sec): Maximum time between requests. Low values could cause the payload size to be smaller than its configured maximum. Defaults to 1.
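
These three settings interact as "whichever comes first" flush triggers. The following is a conceptual sketch of that logic (not LogStream's code):

```typescript
// Conceptual sketch: a batch is sent when it reaches the size cap, the
// event-count cap, or the flush period, whichever comes first.
const MAX_RECORD_KB = 768;     // Max record size (KB, uncompressed)
const MAX_EVENTS = 1000;       // Max events per batch
const FLUSH_PERIOD_MS = 1000;  // Flush period (sec) = 1

let batch: string[] = [];
let batchBytes = 0;
let timer: ReturnType<typeof setTimeout> | undefined;

function enqueue(event: object, send: (records: string[]) => void): void {
  const record = JSON.stringify(event);
  batch.push(record);
  batchBytes += record.length;
  if (batchBytes >= MAX_RECORD_KB * 1024 || batch.length >= MAX_EVENTS) {
    flush(send);  // size or count cap reached: flush immediately
  } else if (!timer) {
    timer = setTimeout(() => flush(send), FLUSH_PERIOD_MS);
  }
}

function flush(send: (records: string[]) => void): void {
  if (timer) { clearTimeout(timer); timer = undefined; }
  if (batch.length > 0) send(batch);
  batch = [];
  batchBytes = 0;
}
```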

Environment: If you're using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Internal Fields

Cribl LogStream uses a set of internal fields to assist in forwarding data to a Destination.

Fields for this Destination (see the sketch following this list):

  • __topicOut
  • __key
  • __headers
  • __keySchemaIdOut
  • __valueSchemaIdOut
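
As a rough illustration of how these fields shape the outgoing record, consider the hypothetical helper below. It is not a LogStream API; it only shows which parts of a Kafka record each internal field would influence.

```typescript
// Hypothetical helper, for illustration only.
function toKafkaRecord(event: any, configuredTopic: string) {
  return {
    topic: event.__topicOut ?? configuredTopic, // __topicOut overrides the configured Topic per event
    key: event.__key,                           // __key becomes the record key
    headers: event.__headers,                   // __headers become record headers
    value: JSON.stringify(event),
    // __keySchemaIdOut / __valueSchemaIdOut would select Schema Registry
    // schema IDs when AVRO encoding is enabled.
  };
}
```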