Google Chronicle

Cribl Stream supports sending data to Google Chronicle, a cloud service for retaining, analyzing, and searching enterprise security and network telemetry data.

To define a Google Chronicle Destination, you need to obtain an API key from Google. If you want Cribl Stream or an external KMS to manage the API key, configure a secret that references the API key.

Type: Streaming | TLS Support: Yes | PQ Support: Yes

Configuring Cribl Stream to Output to Chronicle

From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:

To configure via the graphical QuickConnect UI, click Routing > QuickConnect (Stream) or Collect (Edge). Next, click Add Destination at right. From the resulting drawer’s tiles, select Google Cloud > Chronicle. Next, click either Add Destination or (if displayed) Select Existing. The resulting drawer will provide the options below.

Or, to configure via the Routing UI, click Data > Destinations (Stream) or More > Destinations (Edge). From the resulting page’s tiles or the Destinations left nav, select Google Cloud > Chronicle. Next, click Add Destination to open a New Destination modal that provides the options below.

General Settings

Output ID: Enter a unique name to identify this Chronicle output definition. If you clone this Destination, Cribl Stream will add -CLONE to the original Output ID.

Default log type: Select an application log type to send to Chronicle. (Google Chronicle expects all batches for a given Destination to have the same log type.) The __logType event field, if present, will overwrite this setting, which also interacts with the optional Custom log types controls below. See the Google documentation for a current list of supported log types.
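
As a rough illustration of that precedence, the TypeScript sketch below (the event shape and function name are hypothetical, not Cribl’s internal API) shows a per-event __logType value taking priority over the Destination’s Default log type:

    // Hypothetical event shape; only the __logType field matters here.
    interface ChronicleEvent {
      __logType?: string;
      [field: string]: unknown;
    }

    // If the event carries __logType, that value wins; otherwise the
    // Destination's Default log type applies.
    function resolveLogType(event: ChronicleEvent, defaultLogType: string): string {
      return event.__logType ?? defaultLogType;
    }

    // resolveLogType({ __logType: "OVERRIDE_TYPE" }, "DEFAULT_TYPE") => "OVERRIDE_TYPE"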

Optional Settings

Custom log types: In Cribl Stream 4.0 and later, you can use the Add type button here to define each desired type. Each will be a table row with Log type and Description fields. If you set the Default log type drop-down above to the value Custom, Cribl Stream will automatically select the first custom log type in this table as the default log type. Use the grab handles at left to reorder table rows.

Namespace: Optionally, configure an environment namespace to identify logs’ originating data domain. You can use this as a tag when indexing and/or enriching data. The __namespace event field, if present, will overwrite this.

Customer ID: A unique identifier (UUID) corresponding to a particular Chronicle instance. To use this optional field, request the ID from your Chronicle representative.

Send events as: Unstructured is the only currently supported format. Cribl plans to add UDM (Unified Data Model) support in a future release.

Log text field: Specify the event field that contains the log text to send. If you do not specify a log text field, Cribl Stream sends a JSON representation of the whole event.
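
A minimal sketch of that fallback (hypothetical helper, not Cribl’s implementation): if the configured field is present and holds a string, its value is sent as the log text; otherwise the whole event is serialized as JSON.

    // Returns the text to send for one event, per the Log text field setting.
    function buildLogText(event: Record<string, unknown>, logTextField?: string): string {
      if (logTextField && typeof event[logTextField] === "string") {
        return event[logTextField] as string; // send just the named field
      }
      return JSON.stringify(event);           // fall back to the whole event as JSON
    }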

Region: From the drop-down, choose the Google Chronicle regional endpoint to send events to.

Backpressure behavior: Whether to block, drop, or queue events when all receivers are exerting backpressure. (Causes might include a broken or denied connection, or a rate limiter.) Defaults to Block.

Tags: Optionally, add tags that you can use to filter and group Destinations in Cribl Stream’s Manage Destinations page. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.

Authentication

The Google Chronicle API key is required to complete this part of the Destination definition.

Use the Authentication method drop-down to select one of these options:

  • Manual: In the resulting API key field, enter your Google Chronicle API key.

  • Secret: This option exposes a Secret drop-down, in which you can select a stored secret that references your Google Chronicle API key. A Create link is available to store a new, reusable secret.

Persistent Queue Settings

This section is displayed when the Backpressure behavior is set to Persistent Queue.

Max file size: The maximum data volume to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped and data blocking is applied. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to: $CRIBL_HOME/state/queues.
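
For example, the effective queue location combines the configured path with the Worker and output IDs, roughly as in this hypothetical helper:

    import * as path from "node:path";

    // Configured Queue file path plus <worker-id>/<output-id>.
    function queueDir(configuredPath: string, workerId: string, outputId: string): string {
      return path.join(configuredPath, workerId, outputId);
    }

    // queueDir("$CRIBL_HOME/state/queues", "worker-3", "chronicle")
    //   => "$CRIBL_HOME/state/queues/worker-3/chronicle"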

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.

Clear Persistent Queue: Click this “panic” button if you want to delete the files that are currently queued for delivery to this Destination. A confirmation modal will appear, because this permanently deletes the queued data (freeing up disk space) without delivering it to downstream receivers. (This button appears only after the Output ID has been defined.)

Strict ordering: The default Yes position enables FIFO (first in, first out) event forwarding. When receivers recover, Cribl Stream will send earlier queued events before forwarding newly arrived events. To instead prioritize new events before draining the queue, toggle this off. Doing so will expose this additional control:

  • Drain rate limit (EPS): Optionally, set a throttling rate (in events per second) on writing from the queue to receivers. (The default 0 value disables throttling.) Throttling the queue’s drain rate can boost the throughput of new/active connections, by reserving more resources for them. You can further optimize Workers’ startup connections and CPU load at Group Settings > Worker Processes.
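
To make the effect of that limit concrete, here is a loose sketch (hypothetical, not Cribl code) of draining a queue at a fixed events-per-second rate, where 0 disables throttling:

    // Drain queued events to a sender, at most `eps` events per second.
    function drainQueue(queue: string[], eps: number, send: (event: string) => void): void {
      if (eps <= 0) {
        queue.splice(0).forEach(send); // 0 disables throttling: drain as fast as possible
        return;
      }
      const timer = setInterval(() => {
        const event = queue.shift();
        if (event === undefined) {
          clearInterval(timer);        // queue drained
          return;
        }
        send(event);
      }, 1000 / eps);
    }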

Processing Settings

Post-Processing

Pipeline: In this section’s Pipeline drop-down list, you can select a single existing Pipeline to process data before it is sent out via this Destination.

System fields: Specify any fields you want Cribl Stream to automatically add to events using this output. Wildcards are supported. By default, includes cribl_pipe (identifying the Cribl Stream Pipeline that processed the event). Other options include:

  • cribl_host – Cribl Stream Node that processed the event.
  • cribl_input – Cribl Stream Source that processed the event.
  • cribl_output – Cribl Stream Destination that processed the event.
  • cribl_route – Cribl Stream Route (or QuickConnect) that processed the event.
  • cribl_wp – Cribl Stream Worker Process that processed the event.

Retries

Honor Retry-After header: Whether to honor a Retry-After header, provided that the header specifies a delay no longer than 180 seconds. Cribl Stream limits the delay to 180 seconds even if the Retry-After header specifies a longer delay. When enabled, any Retry-After header received takes precedence over all other options configured in the Retries section. When disabled, all Retry-After headers are ignored.
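
A small sketch of that cap (hypothetical helper; it handles only the delay-in-seconds form of the header, not the HTTP-date form):

    // Delay to honor, in milliseconds, capped at 180 seconds.
    // Returns undefined if the header value isn't a usable number of seconds.
    function retryAfterDelayMs(headerValue: string): number | undefined {
      const seconds = Number(headerValue);
      if (!Number.isFinite(seconds) || seconds < 0) {
        return undefined;
      }
      return Math.min(seconds, 180) * 1000;
    }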

Settings for failed HTTP requests: When you want to automatically retry requests that receive particular HTTP response status codes, use these settings to list those response codes.

For any HTTP response status codes that are not explicitly configured for retries, Cribl Stream applies the following rules:

  • Greater than or equal to 400 and less than or equal to 500: Drop the request.
  • Greater than 500: Retry the request.

Upon receiving a response code that’s on the list, Cribl Stream first waits for a set time interval called the Pre-backoff interval and then begins retrying the request. Time between retries increases based on an exponential backoff algorithm whose base is the Backoff multiplier, until the backoff multiplier reaches the Backoff limit (ms). At that point, Cribl Stream continues retrying the request without increasing the time between retries any further.

By default, 429 (Too Many Requests) and 503 (Service Unavailable) are the only response codes configured for automatic retries. For each response code you want to add to the list, click Add Setting and configure the following settings:

  • HTTP status code: A response code that indicates a failed request, for example 429 (Too Many Requests) or 503 (Service Unavailable).
  • Pre-backoff interval (ms): The amount of time to wait before beginning retries, in milliseconds. Defaults to 30000 (30 seconds).
  • Backoff multiplier: The base for the exponential backoff algorithm; defaults to 1. A value of 2 means that Cribl Stream will retry after 2 seconds, then 4 seconds, then 8 seconds, and so on.
  • Backoff limit (ms): The maximum backoff interval Cribl Stream should apply for its final retry, in milliseconds. Defaults to 30000 (30 seconds); minimum is 10,000 (10 seconds); maximum is 180,000 (180 seconds, or 3 minutes).
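
The sketch below shows one way to read these three settings together; it is illustrative only, and the schedule Cribl Stream computes internally may differ in detail:

    // Wait times (ms) before retry 1, 2, 3, ... of a failed request.
    function retrySchedule(
      preBackoffMs: number, // wait before the first retry
      multiplier: number,   // exponential base: 2 => 2 s, 4 s, 8 s, ...
      limitMs: number,      // cap on the interval between retries
      retries: number
    ): number[] {
      const waits: number[] = [preBackoffMs];
      for (let attempt = 1; attempt < retries; attempt++) {
        waits.push(Math.min(Math.pow(multiplier, attempt) * 1000, limitMs));
      }
      return waits;
    }

    // retrySchedule(1000, 2, 30000, 5) => [1000, 2000, 4000, 8000, 16000]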

Retry timed-out HTTP requests: When you want to automatically retry requests that have timed out, toggle this control on to display the following settings for configuring retry behavior:

  • Pre-backoff interval (ms): The amount of time to wait before beginning retries, in milliseconds. Defaults to 1000 (one second).
  • Backoff multiplier: The base for the exponential backoff algorithm. A value of 2 (the default) means that Cribl Stream will retry after 2 seconds, then 4 seconds, then 8 seconds, and so on.
  • Backoff limit (ms): The maximum backoff interval Cribl Stream should apply for its final retry, in milliseconds. Default (and minimum) is 10,000 (10 seconds); maximum is 180,000 (180 seconds, or 3 minutes).

Advanced Settings

Validate server certs: Toggle to Yes to reject certificates that are not authorized by a CA in the CA certificate path, nor by another trusted CA (e.g., the system’s CA).

Round-robin DNS: Toggle to Yes to use round-robin DNS lookup across multiple IPv6 addresses. When a DNS server returns multiple addresses, this will cause Cribl Stream to cycle through them in the order returned.
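
As a trivial sketch of that cycling behavior (hypothetical helper, not the actual resolver):

    // Cycle through the addresses a DNS lookup returned, in order.
    function makeRoundRobin(addresses: string[]): () => string {
      let next = 0;
      return () => addresses[next++ % addresses.length];
    }

    // const pick = makeRoundRobin(["2001:db8::1", "2001:db8::2"]);
    // pick(); pick(); pick(); // => ::1, ::2, ::1, ...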

Compress: Compresses the payload body before sending. Defaults to Yes (recommended).

Request timeout: Enter an amount of time, in seconds, to wait for a request to complete before aborting it.

Request concurrency: Enter the maximum number of ongoing requests to allow before blocking.

Max body size (KB): Maximum size of the request body before compression. Defaults to 1024 KB. The actual request body size might exceed the specified value because the Destination adds bytes when it writes to the downstream receiver. Cribl recommends that you experiment with the Max body size value until downstream receivers reliably accept all events.

Max events per request: Enter the maximum number of events to include in the request body. Defaults to 0 (unlimited).

Flush period (sec): Enter the maximum time to allow between requests. Be aware that small values could cause the payload size to be smaller than the configured Max body size.
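
Taken together, Max body size, Max events per request, and Flush period determine when a batch is sent. This simplified sketch (hypothetical; it measures size before compression and ignores per-request overhead) shows the three flush conditions:

    interface FlushLimits {
      maxBodyBytes: number;  // Max body size, in bytes
      maxEvents: number;     // Max events per request; 0 = unlimited
      flushPeriodMs: number; // Flush period, in milliseconds
    }

    class RequestBatch {
      private events: string[] = [];
      private bytes = 0;
      private readonly openedAt = Date.now();

      constructor(private readonly limits: FlushLimits) {}

      add(serializedEvent: string): void {
        this.events.push(serializedEvent);
        this.bytes += new TextEncoder().encode(serializedEvent).length;
      }

      shouldFlush(): boolean {
        const { maxBodyBytes, maxEvents, flushPeriodMs } = this.limits;
        return (
          this.bytes >= maxBodyBytes ||                         // body size limit reached
          (maxEvents > 0 && this.events.length >= maxEvents) || // event count limit reached
          Date.now() - this.openedAt >= flushPeriodMs           // flush period elapsed
        );
      }
    }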

Extra HTTP Headers: Click Add Header to insert extra headers as Name/Value pairs. Values will be sent encrypted.

Failed request logging mode: Use this drop-down to determine which data should be logged when a request fails. Select among None (the default), Payload, or Payload + Headers. With this last option, Cribl Stream will redact all headers, except non-sensitive headers that you declare below in Safe headers.

Safe headers: Add headers to declare them as safe to log in plaintext. (Sensitive headers such as authorization will always be redacted, even if listed here.) Use a tab or hard return to separate header names.
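
A rough sketch of the redaction rule those two settings describe (hypothetical helper; the real handling is more involved):

    // Redact every header except those declared safe; authorization is always redacted.
    function redactForLogging(
      headers: Record<string, string>,
      safeHeaders: string[]
    ): Record<string, string> {
      const safe = new Set(safeHeaders.map((name) => name.toLowerCase()));
      const redacted: Record<string, string> = {};
      for (const [name, value] of Object.entries(headers)) {
        const lower = name.toLowerCase();
        redacted[name] = lower !== "authorization" && safe.has(lower) ? value : "<redacted>";
      }
      return redacted;
    }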

Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Proxying Requests

If you need to proxy HTTP/S requests, see System Proxy Configuration.