Version: 3.2

Grafana Cloud

Cribl LogStream can send data to two of the services available in Grafana Cloud: Loki for logs and Prometheus for metrics. The Grafana Cloud Destination shapes events appropriately for Loki and Prometheus, and routes events to the correct endpoint for each service. This is a streaming Destination type.

Preparing Prometheus and Loki to Receive Data from LogStream

To define a Grafana Cloud Destination, you need a Grafana Cloud account.

While logged in to your Grafana account, navigate to the Grafana Cloud Portal for your organization (<your-organization-name>), and complete the following steps.

Obtain an API key, setting its Role to MetricsPublisher. If you want LogStream or an external KMS to manage the API key, configure a key pair that references the API key.

In the Prometheus tile, click Send Metrics to open the Prometheus configuration page. Write down:

  • Your Remote Write Endpoint URL.
  • Your Prometheus Username.

In the Loki tile, click Send Logs to open the Loki configuration page. Write down:

  • Your Grafana Data Source settings URL.
  • Your Loki User ID.

Decide what type of authentication to use and prepare accordingly:

  • If you choose Basic authentication, the username (Username in Prometheus, User in Loki) and password (simply your Grafana API key) will remain separate.

  • If you choose token-based authentication, construct your tokens by concatenating the username, a colon (:), and the password, for example 12345:cOQvDj6sJGFS3Bk2MguBW==. Because the Prometheus and Loki usernames differ, you need to construct a separate token for each service.
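
The token construction described above can be sketched as follows. The API key and usernames here are hypothetical placeholders; substitute the values you recorded from the Prometheus and Loki configuration pages.

```javascript
// Hypothetical values -- substitute your own from the Grafana Cloud Portal.
const apiKey = "cOQvDj6sJGFS3Bk2MguBW==";  // your Grafana API key
const prometheusUsername = "12345";        // Prometheus Username
const lokiUser = "67890";                  // Loki User (hypothetical)

// One token per service: <username>:<api-key>
const prometheusToken = `${prometheusUsername}:${apiKey}`;
const lokiToken = `${lokiUser}:${apiKey}`;
```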

Configuring Cribl LogStream to Output to Grafana Cloud

In the QuickConnect UI: Click + Add beside Destinations. From the resulting drawer's tiles, select Grafana Cloud. Next, click either + Add New or (if displayed) Select Existing. The resulting drawer will provide the following options and fields.

Or, in the Data Routes UI: From the top nav of a LogStream instance or Group, select Data > Destinations. From the resulting page's tiles or the Destinations left nav, select Grafana Cloud. Next, click + Add New to open a New Destination modal that provides the following options and fields.

General Settings

Output ID: Enter a unique name to identify this Grafana Cloud output definition.

Loki URL: The endpoint to send log events to. This is the Grafana Data Source settings URL you wrote down earlier.

Prometheus URL: The endpoint to send metric events to. This is the Remote Write Endpoint URL you wrote down earlier.

Backpressure behavior: Whether to block, drop, or queue events when all receivers are exerting backpressure.

Persistent Queue Settings

This section is displayed when the Backpressure behavior is set to Persistent Queue.

Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped and data blocking is applied. Enter a numeral with units of KB, MB, etc.

Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to: $CRIBL_HOME/state/queues.

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None. Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.


The Authentication tab provides separate Loki and Prometheus sections, enabling you to configure the two services independently. The two sections provide identical options.

Use the Authentication method buttons to select one of these options:

  • Auth token: Enter the bearer token that must be included in the authorization header. Use the token that you constructed earlier. In Grafana Cloud, the bearer token is generally built by concatenating the username and the API key, separated by a colon. E.g.: <your-username>:<your-api-key>.

  • Auth token (text secret): This option exposes a drop-down in which you can select a stored text secret that references the bearer token described above. A Create link is available to store a new, reusable secret.

  • Basic: This default option displays fields for you to enter HTTP Basic authentication credentials. Username is the Loki User or Prometheus Username that you wrote down earlier. Password is your Grafana Cloud API key.

  • Basic (credentials secret): This option exposes a Credentials secret drop-down, in which you can select a stored text secret that references the Basic authentication credentials described above. A Create link is available to store a new, reusable secret.
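
To clarify what the two authentication methods actually send, here is an illustrative sketch of the resulting Authorization headers. LogStream builds these for you from the settings above; the username and API key are hypothetical.

```javascript
// Hypothetical credentials -- substitute your own.
const user = "12345";                      // Prometheus Username or Loki User
const apiKey = "cOQvDj6sJGFS3Bk2MguBW=="; // your Grafana API key

// Basic: the <user>:<password> pair is base64-encoded into the header.
const basicHeader = "Basic " + Buffer.from(`${user}:${apiKey}`).toString("base64");

// Auth token: the pre-built bearer token is sent as-is.
const bearerHeader = `Bearer ${user}:${apiKey}`;
```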

Processing Settings

Metric events can have dimensions, and log events have labels. Dimensions, labels, and their values are determined by several different LogStream settings, described below.

Loki uses labels to define separate streams of logging data. This is a key concept. Cribl recommends that you familiarize yourself with the information and documentation Grafana provides about labels in Loki.

One canonical example is processing logs from servers in three environments: production, staging, and testing. You could create a label named env whose possible values are prod, staging, and test.

One basic principle: every unique combination of labels and values defines its own stream, so defining too many labels (or labels with many possible values) produces a large number of small streams, which degrades Loki's performance.
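
The env example above maps onto Loki's push payload like this. This is a hand-written sketch of the payload shape (LogStream constructs it for you); the timestamp and log line are made up.

```javascript
// Loki push payload: the label set under `stream` is the stream's identity,
// so the env label splits data into one stream per environment.
const payload = {
  streams: [
    {
      stream: { env: "prod" },                                 // label set
      values: [["1700000000000000000", "GET /health 200"]],    // [ns timestamp, line]
    },
  ],
};
```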


Pipeline: Pipeline to process data before sending the data out using this output.

System fields: A list of fields to automatically add to events that use this output — to metric events as dimensions, and to log events as labels. Supports wildcards.

By default, includes cribl_host (LogStream Node that processed the event) and cribl_wp (LogStream Worker Process that processed the event). On the Loki side, this creates different streams, which prevents Loki from rejecting some events as being out of order when different Nodes or Worker Processes are emitting at different rates.

Other options include:

  • cribl_pipe – LogStream Pipeline that processed the event.
  • cribl_input – LogStream Source that processed the event.
  • cribl_output – LogStream Destination that processed the event.
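
Why the cribl_host and cribl_wp defaults help: Loki identifies a stream by its full label set, so events from different Nodes or Worker Processes land in different streams, and each stream's timestamps only need to be ordered relative to themselves. A small illustrative sketch (not LogStream internals):

```javascript
// A stream's identity is its full, order-independent label set.
const streamKey = (labels) =>
  JSON.stringify(Object.fromEntries(Object.entries(labels).sort()));

const a = streamKey({ cribl_host: "node1", cribl_wp: "wp0" });
const b = streamKey({ cribl_host: "node1", cribl_wp: "wp1" });
// a !== b: two Worker Processes produce two independent streams.
```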

Advanced Settings

Validate server certs: Reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (e.g., the system's CA). Defaults to Yes.

Round-robin DNS: Toggle to Yes to use round-robin DNS lookup. When a DNS server returns multiple addresses, this will cause LogStream to cycle through them in the order returned.

Compress: When the Message format is JSON, you can toggle this slider to Yes to GZIP-compress the data before sending to Grafana Cloud. (Applies only to Loki's JSON payloads. This slider is hidden when the Message format is Protobuf, because both Prometheus' and Loki's Protobuf implementations are Snappy-compressed by default.)

Request timeout: Amount of time (in seconds) to wait for a request to complete before aborting it. Defaults to 30.

Request concurrency: Maximum number of concurrent requests before blocking. This is set per Worker Process. Defaults to 5.

Max body size (KB): Maximum size of the request body. Defaults to 4096 KB.

Max events per request: Maximum number of events to include in the request body. The 0 default allows unlimited events.

Loki and Prometheus might complain about entries being delivered out of order when Request concurrency is set > 1 and any of Flush period (sec), Max body size (KB), or Max events per request are set to low values.

Flush period (sec): Maximum time between requests. Low values could cause the payload size to be smaller than its configured maximum. Defaults to 1.
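
The interplay of these three limits can be sketched as a simple batcher. This is illustrative only, not LogStream's implementation: a request is flushed when it reaches Max events per request, reaches Max body size, or when the Flush period elapses.

```javascript
// Illustrative batcher: flushes on event-count or body-size limits;
// flush() would also be called when the flush period elapses.
class Batcher {
  constructor({ maxEvents, maxBodyBytes }) {
    this.maxEvents = maxEvents;       // 0 = unlimited, as in the UI
    this.maxBodyBytes = maxBodyBytes;
    this.events = [];
    this.bytes = 0;
    this.flushed = [];                // batches "sent" so far
  }
  add(event) {
    this.events.push(event);
    this.bytes += Buffer.byteLength(event);
    if ((this.maxEvents > 0 && this.events.length >= this.maxEvents) ||
        this.bytes >= this.maxBodyBytes) {
      this.flush();
    }
  }
  flush() {
    if (this.events.length) this.flushed.push(this.events.splice(0));
    this.bytes = 0;
  }
}
```

With low limits, batches stay small and flush often; combined with Request concurrency above 1, several small in-flight requests can arrive out of order, which is the situation the note above describes.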

Extra HTTP headers: Name/Value pairs to pass as additional HTTP headers.

Metric renaming expression: A JavaScript expression that can be used to rename metrics. The default expression – name.replace(/\./g, '_') – replaces all . characters in a metric's name with the Prometheus-supported _ character. Use the name global variable to access the metric's name. You can access event fields' values via __e.<fieldName>.
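
Applying the default renaming expression to a dotted metric name (the metric name here is a made-up example):

```javascript
// `name` stands in for the global variable the expression is evaluated with.
const name = "http.server.duration";
const renamed = name.replace(/\./g, "_");  // -> "http_server_duration"
```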

Message format: Whether to send events as Protobuf (the default) or JSON.

Logs message field: The event field to send as log output, for example: _raw. All other event fields are discarded. If left blank, LogStream sends a JSON representation of the whole event.

Logs labels: Name/value pairs where the value can be a static or dynamic expression that has access to all log event fields.

Internal Fields

Cribl LogStream uses a set of internal fields to assist in forwarding data to a Destination.

If an event contains the internal field __criblMetrics, LogStream will send it to Prometheus as a metric event. If __criblMetrics is absent, LogStream will treat the event as a log and send it to Loki.

The internal field __labels specifies labels to add to log events. If a label is set in both the __labels field and in Logs labels and/or System fields, LogStream sends the value from __labels to Loki. Setting the __labels field in a Pipeline gives you a quick way to experiment with the logs being sent.
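
The precedence described above can be sketched as a simple merge (not LogStream internals): values in __labels win over Logs labels and System fields. The label values here are hypothetical.

```javascript
// Later spreads override earlier ones, so __labels takes precedence.
const systemFields = { cribl_host: "node1" };   // from System fields
const logsLabels = { env: "staging" };          // from Logs labels
const __labels = { env: "prod" };               // set in a Pipeline

const effective = { ...systemFields, ...logsLabels, ...__labels };
// -> { cribl_host: "node1", env: "prod" }
```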

If there are no labels set (this would happen when System fields, Logs labels, and __labels are all empty), LogStream adds a default source label, which prevents Loki from rejecting events. The source label is the concatenation of cribl, an underscore (_), the source type, a colon (:), and the source name, where the source name and type are values in the __inputId event field, for example: cribl_metrics:in_prometheus_rw. If __inputId is missing, source is set to cribl.
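
The default-label rule above can be sketched as follows. This assumes __inputId has the form <type>:<name>, as the cribl_metrics:in_prometheus_rw example suggests:

```javascript
// Sketch of the fallback source-label rule (not LogStream internals).
function defaultSourceLabel(inputId) {
  if (!inputId) return "cribl";               // __inputId missing
  const [type, name] = inputId.split(":");
  return `cribl_${type}:${name}`;             // cribl_<type>:<name>
}
// defaultSourceLabel("metrics:in_prometheus_rw") -> "cribl_metrics:in_prometheus_rw"
```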

Notes on HTTP-based Outputs

  • The Advanced Settings > Compress toggle determines whether to compress the payload body before sending to Loki only. The toggle setting does not apply to Prometheus payloads, which are always compressed using Snappy.

  • LogStream will attempt to use keepalives to reuse a connection for multiple requests. Two minutes after a connection is first used, LogStream discards it and establishes a new one. This prevents sticking to a particular destination when there is a constant flow of events.

  • If the server does not support keepalives (or if the server closes a pooled connection while idle), a new connection will be established for the next request.

  • When resolving the Destination's hostname, LogStream will pick the first IP in the list for use in the next connection. Enable Round-robin DNS to better balance distribution of events between Grafana Cloud nodes.
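
A minimal sketch of the difference Round-robin DNS makes: without it, the first resolved address is used for each new connection; with it, LogStream cycles through the returned addresses in order. The IPs here are made up.

```javascript
// Cycle through the addresses a DNS lookup returned, in order.
function roundRobin(addresses) {
  let i = 0;
  return () => addresses[i++ % addresses.length];
}

const next = roundRobin(["10.0.0.1", "10.0.0.2", "10.0.0.3"]);
// next() -> "10.0.0.1", then "10.0.0.2", "10.0.0.3", "10.0.0.1", ...
```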