Cribl LogStream supports sending data to Google Cloud Pub/Sub, a managed service for real-time message exchange between applications. This is a streaming Destination type.
Configuring Cribl LogStream to Output to Pub/Sub
In the QuickConnect UI: Click + Add beside Destinations. From the resulting drawer's tiles, select Google Cloud > Pub/Sub. Next, click either + Add New or (if displayed) Select Existing. The resulting drawer will provide the following options and fields.
Or, in the Data Routes UI: From the top nav of a LogStream instance or Group, select Data > Destinations. From the resulting page's tiles or the Destinations left nav, select Google Cloud > Pub/Sub. Next, click + Add New to open a New Destination modal that provides the following options and fields.
Output ID: Enter a unique name to identify this Pub/Sub output definition.
Topic ID: ID of the Pub/Sub topic to send events to.
Create topic: Toggle to Yes if you want LogStream to create the topic on Pub/Sub if it does not already exist.
Ordered delivery: Toggle to Yes if you want LogStream to send events in the order that they arrived in the queue. (For this to work correctly, the process receiving events must have ordering enabled.)
Region: Region to publish messages to. Select default to allow Google to auto-select the nearest region. (If you've enabled Ordered delivery, the selected region must be allowed by the message storage policy.)
Backpressure behavior: Whether to block, drop, or queue events when all receivers are exerting backpressure.
Persistent Queue Settings
This section is displayed when the Backpressure behavior is set to Persistent Queue.
Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc.
Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped, and data blocking is applied. Enter a numeral with units of KB, MB, etc.
Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>.
Compression: Codec to use to compress the persisted data once a file is closed; Gzip is also available.
Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.
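The two Queue-full behaviors above can be modeled with a short sketch. This is an illustrative simplification, not Cribl's implementation: the `PersistentQueueSketch` class, its byte-based capacity check, and the returned status strings are all hypothetical.

```python
from collections import deque

class PersistentQueueSketch:
    """Illustrative model of persistent-queue backpressure (hypothetical,
    not Cribl's implementation). When the queue hits its size limit, new
    data is either blocked (sender must wait) or dropped, while the
    queue's existing contents are left unchanged."""

    def __init__(self, max_queue_bytes, queue_full_behavior="block"):
        self.max_queue_bytes = max_queue_bytes
        self.queue_full_behavior = queue_full_behavior  # "block" or "drop"
        self.used_bytes = 0
        self.events = deque()

    def enqueue(self, event: bytes) -> str:
        if self.used_bytes + len(event) > self.max_queue_bytes:
            # Queue at capacity: block the sender, or drop the new event.
            return "blocked" if self.queue_full_behavior == "block" else "dropped"
        self.events.append(event)
        self.used_bytes += len(event)
        return "queued"

q = PersistentQueueSketch(max_queue_bytes=10, queue_full_behavior="drop")
print(q.enqueue(b"12345"))   # queued
print(q.enqueue(b"123456"))  # would exceed capacity -> dropped
```

Note that in both modes the already-queued events survive; the behaviors differ only in what happens to incoming data.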
Use the Authentication Method buttons to select one of these options:
Auto: This option uses the PUBSUB_CREDENTIALS environment variable, and requires no configuration here.
Manual: This default option displays a Service account credentials field for you to enter the contents of your service account credentials file (a set of JSON keys), as downloaded from Google Cloud.
To insert the file itself, click the upload button at this field's upper right. As an alternative, you can use environment variables, as outlined here.
Secret: This option exposes a drop-down in which you can select a stored secret that references the service account credentials described above. A Create link is available to store a new, reusable secret.
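The Manual-versus-Auto precedence described above can be sketched as a small resolution function. This is a hypothetical illustration: the function name, its fallback order, and the treatment of the environment variable's value as a file path are assumptions, not Cribl's actual logic.

```python
import json
import os

def resolve_credentials(manual_json=None, env_var="PUBSUB_CREDENTIALS"):
    """Hypothetical sketch of credential resolution: prefer pasted
    service-account JSON (the Manual option), else fall back to the
    environment variable (the Auto option, assumed here to hold a path
    to a downloaded key file). Returns a dict of keys, or None."""
    if manual_json:
        return json.loads(manual_json)
    path = os.environ.get(env_var)
    if path and os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return None

creds = resolve_credentials('{"type": "service_account", "project_id": "my-project"}')
print(creds["project_id"])  # my-project
```

A stored Secret would slot into the same flow as another source of the same JSON key material.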
Pipeline: Pipeline to process data before sending the data out using this output.
System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:
cribl_host – LogStream Node that processed the event.
cribl_wp – LogStream Worker Process that processed the event.
cribl_input – LogStream Source that processed the event.
cribl_output – LogStream Destination that processed the event.
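Since the System fields setting supports wildcards, enrichment can be pictured as pattern-matching field names against the configured list. The sketch below is hypothetical (the function and the sample values are invented for illustration); it uses shell-style wildcard matching, which is one plausible reading of "supports wildcards."

```python
from fnmatch import fnmatch

def add_system_fields(event, available, patterns):
    """Hypothetical sketch: copy each available system field whose name
    matches one of the configured (wildcard-capable) patterns into the
    outbound event."""
    out = dict(event)
    for name, value in available.items():
        if any(fnmatch(name, p) for p in patterns):
            out[name] = value
    return out

# Invented sample values for illustration only.
available = {
    "cribl_pipe": "main",
    "cribl_host": "worker-1",
    "cribl_wp": "wp0",
}
enriched = add_system_fields({"msg": "hi"}, available, ["cribl_pipe", "cribl_h*"])
# enriched gains cribl_pipe and cribl_host, but not cribl_wp
```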
Batch size: The maximum number of items the Google API should batch before it sends them to the topic.
Batch timeout (ms): The maximum interval (in milliseconds) that the Google API should wait to send a batch (if the configured Batch size limit has not been reached).
Max queue size: Maximum number of queued batches before blocking.
Max batch size (KB): Maximum size for each sent batch.
Max concurrent requests: The maximum number of in-progress API requests before LogStream applies backpressure.
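The interaction of Batch size, Batch timeout, and Max batch size amounts to "flush on whichever limit is hit first." The following sketch models that rule; the class, its methods, and the parameter values in the example are all illustrative and are not Cribl's internals or defaults.

```python
class BatcherSketch:
    """Hypothetical model of the flush conditions: a batch is sent when
    it reaches the item limit, the size limit, or the timeout, whichever
    comes first. Parameter values are arbitrary, not product defaults."""

    def __init__(self, batch_size, batch_timeout_ms, max_batch_kb):
        self.batch_size = batch_size
        self.batch_timeout_ms = batch_timeout_ms
        self.max_batch_bytes = max_batch_kb * 1024
        self.items = []
        self.bytes = 0
        self.started = None  # timestamp (ms) of the first item in the batch

    def add(self, item: bytes, now_ms: int):
        if self.started is None:
            self.started = now_ms
        self.items.append(item)
        self.bytes += len(item)

    def should_flush(self, now_ms: int) -> bool:
        if not self.items:
            return False
        return (len(self.items) >= self.batch_size          # Batch size
                or self.bytes >= self.max_batch_bytes       # Max batch size (KB)
                or now_ms - self.started >= self.batch_timeout_ms)  # Batch timeout
```

For example, a `BatcherSketch(batch_size=3, batch_timeout_ms=100, max_batch_kb=1)` holding one small item will not flush immediately, but will flush once 100 ms have elapsed, once a third item arrives, or once total bytes reach 1 KB.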
Environment: If you're using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.
Google Cloud Roles and Permissions
Your Google Cloud service account needs, at minimum, a role that can publish to the target topics (such as the Pub/Sub Publisher role). To enable LogStream's Create topic option, your service account needs a role that can also create topics (such as the Pub/Sub Editor role, or higher). The editor role confers multiple permissions, including those from the lower publisher role. For additional details, see the Google Cloud Access Control topic.
Let's Change the Topic
The Pub/Sub Destination supports alternate topics specified at the event level in the __topicOut field. So (e.g.) if a Pub/Sub Destination is configured to send to main topic topic1, and LogStream receives an event with __topicOut: topic2, then LogStream will override the main topic and send this event to topic2.
However, a topic specified in the event's __topicOut field must already exist on Pub/Sub. If it does not, LogStream cannot dynamically create the topic, and will drop the event. On the Destination's Status tab, the Dropped metric tracks the number of events dropped because a specified alternate topic did not exist.
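The routing rule above can be summarized in a few lines. This is a hypothetical sketch of the described behavior, not Cribl's code; the function name and return values are invented for illustration.

```python
def choose_topic(event, main_topic, existing_topics):
    """Hypothetical sketch of __topicOut routing: an event-level
    __topicOut overrides the configured main topic, but only if that
    topic already exists on Pub/Sub; otherwise the event is dropped
    (and counted under the Dropped metric)."""
    override = event.get("__topicOut")
    if override is None:
        return ("send", main_topic)
    if override in existing_topics:
        return ("send", override)
    return ("drop", None)

print(choose_topic({"msg": "a"}, "topic1", {"topic1", "topic2"}))
# ('send', 'topic1')
print(choose_topic({"__topicOut": "topic2"}, "topic1", {"topic1", "topic2"}))
# ('send', 'topic2')
print(choose_topic({"__topicOut": "missing"}, "topic1", {"topic1", "topic2"}))
# ('drop', None)
```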