Cribl Lake Destination
The Cribl Lake Destination sends data to Cribl Lake and automatically selects a partitioning scheme that works well with Cribl Search.
Type: Non-Streaming | TLS Support: Yes | PQ Support: No
The Cribl Lake Destination is available only in Cribl.Cloud, and only in cloud-based Worker Groups.
Configure a Cribl Lake Destination
From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:
- To configure via the graphical QuickConnect UI:
  - Click Routing, then QuickConnect.
  - Click Add Destination at right.
  - From the resulting drawer's tiles, select Data Lakes, then Cribl Lake.
  - Click either Add Destination or (if displayed) Select Existing.
- Or, to configure via the Routing UI:
  - Click Data, then Destinations.
  - From the resulting page's tiles or the Destinations left nav, select Data Lakes, then Cribl Lake.
  - Click Add Destination to open a New Destination modal.
In the Destination modal, configure the following under General Settings:
- Output ID: Enter a unique name to identify this Cribl Lake Destination.
- Lake dataset: Select the Cribl Lake Dataset to send data to. You can't target the built-in cribl_logs and cribl_metrics Datasets with this Destination.
Next, you can configure the following Optional Settings that you’ll find across many Cribl Destinations:
- Backpressure behavior: Whether to block or drop events when all receivers are exerting backpressure. (Causes might include an accumulation of too many files needing to be closed.) Defaults to Block.
- Tags: Optionally, add tags that you can use to filter and group Destinations in Cribl Stream’s Manage Destinations page. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.
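For orientation, a saved Destination is typically persisted in the Worker Group’s outputs.yml. The sketch below shows roughly how the settings above might map to such an entry; the key names (type, dataset, onBackpressure, streamtags) and values are assumptions for illustration, not the documented schema, so configure this Destination through the UI as described above.

```yaml
# Illustrative sketch only; key names and the cribl_lake type identifier are
# assumptions, not the documented schema. Configure via the UI described above.
outputs:
  my-lake-destination:          # Output ID: unique name for this Destination
    type: cribl_lake            # assumed type identifier for a Cribl Lake Destination
    dataset: my_lake_dataset    # Lake dataset to send data to (not cribl_logs or cribl_metrics)
    onBackpressure: block       # Backpressure behavior: block (default) or drop
    streamtags:                 # optional tags for filtering on the Manage Destinations page
      - prod
      - lake
```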
Optionally, configure any Post-Processing settings outlined in the sections below.
Click Save, then Commit & Deploy.
Verify that data is searchable in Cribl Lake.
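To spot-check the Dataset from Cribl Search, a simple query along these lines should return recent events once data has arrived; my_lake_dataset is a placeholder for the Lake Dataset you selected in General Settings.

```
dataset="my_lake_dataset" | limit 10
```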
Processing Settings
Post-Processing
- Pipeline: Pipeline to process data before sending the data out using this output.
- System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the Cribl Stream Pipeline that processed the event). Supports c* wildcards. Other options include:
  - cribl_host – Cribl Stream Node that processed the event.
  - cribl_input – Cribl Stream Source that processed the event.
  - cribl_output – Cribl Stream Destination that processed the event.
  - cribl_route – Cribl Stream Route (or QuickConnect) that processed the event.
  - cribl_wp – Cribl Stream Worker Process that processed the event.
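As an illustration of how these system fields ride along on delivered events, the sketch below shows a hypothetical event after processing; the field names match the list above, while every value is invented.

```json
{
  "_time": 1716400000,
  "message": "example event body",
  "cribl_pipe": "lake_prep",
  "cribl_host": "worker-0",
  "cribl_input": "in_syslog",
  "cribl_output": "my-lake-destination",
  "cribl_route": "default",
  "cribl_wp": "wp0"
}
```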