Cribl LogStream supports sending events to Amazon Simple Queue Service (SQS).
Configuring Cribl LogStream to Send Data to Amazon SQS
In the QuickConnect UI: Click + Add beside Destinations. From the resulting drawer's tiles, select Amazon > SQS. Next, click either + Add New or (if displayed) Select Existing. The resulting drawer will provide the following options and fields.
Or, in the Data Routes UI: From the top nav of a LogStream instance or Group, select Data > Destinations. From the resulting page's tiles or the Destinations left nav, select Amazon > SQS. Next, click + Add New to open a New Destination modal that provides the following options and fields.
Output ID: Enter a unique name to identify this SQS Destination.
Queue type: The queue type used (or created). Defaults to Standard; FIFO (First In, First Out) is the other option.
Message group ID: This parameter applies only to queues of type FIFO. Enter the tag that specifies that a message belongs to a specific message group. (Messages belonging to the same message group are processed in FIFO order.) Defaults to cribl. Use the event field __messageGroupId to override this value.
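For context on what the message group ID does on the SQS side, here is a minimal boto3 sketch (the queue URL is a placeholder; LogStream sends the configured or overridden group ID for you):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Placeholder URL for an existing FIFO queue (FIFO queue names end in .fifo).
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue.fifo"

# Messages sharing a MessageGroupId are delivered in FIFO order relative to
# each other; messages in different groups can be processed in parallel.
for i in range(3):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=f"event {i}",
        MessageGroupId="cribl",              # LogStream's default group tag
        MessageDeduplicationId=f"event-{i}", # required unless content-based dedup is enabled
    )
```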
Create queue: Specifies whether to create the queue if it does not exist. Defaults to Yes.
Region: AWS Region where the SQS queue is located.
Backpressure behavior: Select whether to block, drop, or queue events when all receivers are exerting backpressure. (Causes might include a broken or denied connection, or a rate limiter.) Defaults to Block.
Persistent Queue Settings
This section is displayed when the Backpressure behavior is set to Persistent Queue.
Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.
Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped, and data blocking is applied. Enter a numeral with units of KB, MB, etc.
Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to $CRIBL_HOME/state/queues.
Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.
Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.
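As a rough illustration of the difference between these two modes, here is a toy Python sketch (this is not LogStream's persistent-queue implementation; names and behavior are simplified for illustration):

```python
from collections import deque

class ToyPersistentQueue:
    """Simplified bounded queue showing Block vs. Drop-new-data semantics."""

    def __init__(self, max_events: int, queue_full_behavior: str = "block"):
        self.events = deque()
        self.max_events = max_events
        self.queue_full_behavior = queue_full_behavior  # "block" or "drop"

    def enqueue(self, event) -> bool:
        if len(self.events) < self.max_events:
            self.events.append(event)
            return True
        if self.queue_full_behavior == "drop":
            # Drop new data: discard the incoming event; queued contents are unchanged.
            return False
        # Block: signal the producer to stop sending until the queue drains.
        raise BufferError("queue full; apply backpressure upstream")
```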
Use the Authentication Method buttons to select an AWS authentication method.
Auto: This default option uses the AWS instance's metadata service to automatically obtain short-lived credentials from the IAM role attached to an EC2 instance. The attached IAM role grants LogStream Workers access to authorized AWS resources. Can also use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Works only when running on AWS.
Manual: If not running on AWS, you can select this option to enter a static set of user-associated IAM credentials (your access key and secret key) directly or by reference. This is useful for Workers not in an AWS VPC, e.g., those running in a private cloud. The Manual option exposes these corresponding additional fields:
Access key: Enter your AWS access key. If not present, will fall back to the env.AWS_ACCESS_KEY_ID environment variable, or to the metadata endpoint for IAM role credentials.
Secret key: Enter your AWS secret key. If not present, will fall back to the env.AWS_SECRET_ACCESS_KEY environment variable, or to the metadata endpoint for IAM credentials. (This fallback order is sketched below, after the Secret option.)
Secret: If not running on AWS, you can select this option to supply a stored secret that references an AWS access key and secret key. The Secret option exposes this additional field:
- Secret key pair: Use the drop-down to select an API key/secret key pair that you've configured in LogStream's secrets manager. A Create link is available to store a new, reusable secret.
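The fallback order described above matches the standard AWS credential chain. Here is a boto3 sketch of the same precedence (illustrative only; the keys shown are placeholders, and LogStream performs this resolution internally):

```python
import boto3

# Manual credentials, when supplied, take precedence over everything else.
manual_session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEKEY",        # placeholder access key
    aws_secret_access_key="examplesecretkey",  # placeholder secret key
)

# With no explicit keys, boto3 falls back to the AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY environment variables, then to the EC2 instance
# metadata endpoint for IAM role credentials -- the same order as above.
auto_session = boto3.Session()

sqs = auto_session.client("sqs", region_name="us-east-1")
```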
Enable for SQS: Toggle to Yes to use Assume Role credentials to access SQS.
AWS account ID: Enter the SQS queue owner's AWS account ID. Leave empty if the SQS queue is in the same AWS account where this LogStream instance is located.
AssumeRole ARN: Enter the Amazon Resource Name (ARN) of the role to assume.
External ID: Enter the External ID to use when assuming the role.
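For context, here is how an AssumeRole exchange with an External ID works at the API level, sketched with boto3 (the role ARN and External ID are placeholders; LogStream handles this exchange for you):

```python
import boto3

sts = boto3.client("sts")

# Exchange the Worker's base credentials for short-lived role credentials.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-sqs-writer",  # placeholder ARN
    RoleSessionName="logstream-sqs",
    ExternalId="example-external-id",  # must match the role's trust policy condition
)

creds = resp["Credentials"]
sqs = boto3.client(
    "sqs",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```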
Pipeline: Pipeline to process data before sending the data out using this output.
System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:
cribl_host – LogStream Node that processed the event.
cribl_wp – LogStream Worker Process that processed the event.
cribl_input – LogStream Source that processed the event.
cribl_output – LogStream Destination that processed the event.
Endpoint: SQS service endpoint. If empty, the endpoint will be automatically constructed from the region.
Signature version: Signature version to use for signing SQS requests. Defaults to v4.
Max queue size: Maximum number of queued batches before blocking. Defaults to 100.
Max record size (KB): Maximum size of each individual record. Per the SQS spec, the maximum allowed value is 256 KB (the default).
Flush period (sec): Maximum time between requests. Low settings could cause the payload size to be smaller than its configured maximum. Defaults to 1.
Max concurrent requests: The maximum number of in-progress API requests before backpressure is applied. Defaults to 10.
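To make the batching limits concrete, here is a boto3 sketch of grouping records into SendMessageBatch calls under the SQS caps (illustrative only; the queue URL is a placeholder, and LogStream's actual batching and flush logic is internal):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

MAX_BATCH_BYTES = 256 * 1024  # SQS caps a batch (and a single message) at 256 KB
MAX_BATCH_ENTRIES = 10        # SQS caps SendMessageBatch at 10 entries

def send_in_batches(bodies):
    entries, size = [], 0
    for i, body in enumerate(bodies):
        blen = len(body.encode("utf-8"))
        # Flush when adding this record would exceed either SQS limit.
        if entries and (size + blen > MAX_BATCH_BYTES or len(entries) == MAX_BATCH_ENTRIES):
            sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
            entries, size = [], 0
        entries.append({"Id": str(i), "MessageBody": body})
        size += blen
    if entries:
        sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
```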
Reuse connections: Whether to reuse connections between requests. The default setting (Yes) can improve performance.
Reject unauthorized certificates: Whether to reject certificates that cannot be verified against a valid Certificate Authority (e.g., self-signed certificates). Defaults to Yes.
Environment: If you're using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.
The following permissions are needed to write to an SQS queue:
sqs:ListQueues
sqs:SendMessage
sqs:SendMessageBatch
sqs:CreateQueue
sqs:GetQueueAttributes
sqs:SetQueueAttributes
sqs:GetQueueUrl
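One way to grant these is an IAM policy like the following, sketched here with boto3 (the policy name is a placeholder; where an action supports resource-level permissions, scope Resource down to your queue's ARN instead of "*"):

```python
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "sqs:ListQueues",
            "sqs:SendMessage",
            "sqs:SendMessageBatch",
            "sqs:CreateQueue",
            "sqs:GetQueueAttributes",
            "sqs:SetQueueAttributes",
            "sqs:GetQueueUrl",
        ],
        # "*" keeps the sketch simple; prefer your queue's ARN where supported.
        "Resource": "*",
    }],
}

iam.create_policy(
    PolicyName="logstream-sqs-writer",  # placeholder policy name
    PolicyDocument=json.dumps(policy_doc),
)
```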
Cribl LogStream uses a set of internal fields to assist in handling of data. These "meta" fields are not part of an event, but they are accessible, and functions can use them to make processing decisions.
Fields for this Destination:
__messageGroupId – Overrides the configured Message group ID when sending to a FIFO queue.