Cribl LogStream can send log and metric events to Datadog. (Datadog supports only metrics of type rate via its REST API.)
LogStream sends events to the following Datadog endpoints in the US region. Use a DNS lookup to discover the corresponding IP addresses, and include them in your firewall rules' allowlist.
- Logs: https://http-intake.logs.datadoghq.com/v1/input
- Metrics: https://api.datadoghq.com/api/v1/series
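As a convenience, the lookup above can be scripted. This is a minimal sketch (not part of LogStream) that extracts each endpoint's hostname and resolves its current IPv4 addresses; note that cloud IPs can change, so re-run it periodically.

```python
# Hypothetical helper: resolve the Datadog endpoint hostnames so their
# IPs can be added to a firewall allowlist. Not a LogStream feature.
import socket
from urllib.parse import urlparse

ENDPOINTS = [
    "https://http-intake.logs.datadoghq.com/v1/input",
    "https://api.datadoghq.com/api/v1/series",
]

def endpoint_host(url: str) -> str:
    """Extract the hostname to look up from a full endpoint URL."""
    return urlparse(url).hostname

def resolve_ips(host: str) -> set:
    """Return the set of IPv4 addresses currently advertised for a host."""
    return {info[4][0] for info in socket.getaddrinfo(host, 443, socket.AF_INET)}

if __name__ == "__main__":
    # resolve_ips() needs network access; call it per host as needed.
    for url in ENDPOINTS:
        print(endpoint_host(url))
```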
Configuring Cribl LogStream to Output to Datadog
In the QuickConnect UI: Click + Add beside Destinations. From the resulting drawer's tiles, select Datadog. Next, click either + Add New or (if displayed) Select Existing. The resulting drawer will provide the following options and fields.
Or, in the Data Routes UI: From the top nav of a LogStream instance or Group, select Data > Destinations. From the resulting page's tiles or the Destinations left nav, select Datadog. Next, click + Add New to open a New Destination modal that provides the following options and fields.
Output ID: Enter a unique name to identify this Destination definition.
Authentication method: See Authentication Settings below.
Send logs as: Specify the content type to use when sending logs. Defaults to application/json, where each log message is represented by a JSON object. The alternative text/plain option sends one message per line, with newline characters as delimiters.
Message field: Name of the event field that contains the message to send. If not specified, LogStream sends a JSON representation of the whole event (regardless of whether Send logs as is set to JSON or plain text).
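To make the interaction between Send logs as and Message field concrete, here is a minimal sketch (an illustration, not LogStream's actual serializer) of how a batch body might be built for each content type:

```python
import json

def serialize_logs(events, send_logs_as="application/json", message_field=None):
    """Illustrative only: build a request body from a list of event dicts.

    - application/json: each log message becomes a JSON object.
    - text/plain: one message per line, newline-delimited.
    If message_field is unset, the whole event is serialized as JSON,
    regardless of the chosen content type (as the doc describes).
    """
    if message_field:
        messages = [str(e.get(message_field, "")) for e in events]
    else:
        messages = [json.dumps(e) for e in events]
    if send_logs_as == "application/json":
        return json.dumps([{"message": m} for m in messages])
    return "\n".join(messages)
```

For example, with `message_field="msg"` and text/plain, only each event's `msg` value is sent, one per line.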
Source: Name of the source to send with logs. If you're sending logs as JSON objects (i.e., you've selected Send logs as: application/json), the event's source field (if set) will override this value.
Host: Name of the host to send with logs. If you're sending logs as JSON objects, the event's host field (if set) will override this value.
Service: Name of the service to send with logs. If you're sending logs as JSON objects, the event's __service field (if set) will override this value.
Tags: List of tags to send with logs (e.g., key:value pairs).
Severity: Default value for message severity. If you're sending logs as JSON objects, the event's __severity field (if set) will override this value. Defaults to info; the drop-down offers many other severity options.
Datadog uses the above five fields (source, host, service, severity, and tags) to enhance searches and UX.
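The default-vs-override behavior described above can be sketched as a simple merge. This is an illustration under assumptions, not LogStream's code; the `ddsource`/`hostname`/`service`/`status` attribute names follow Datadog's reserved log attributes:

```python
def effective_metadata(event, config):
    """Illustrative merge: configured defaults, overridden per event
    when the corresponding field is present (JSON mode only, per the doc).

    source/host come from plain event fields; service/severity come from
    the internal __service/__severity fields.
    """
    return {
        "ddsource": event.get("source", config.get("source")),
        "hostname": event.get("host", config.get("host")),
        "service": event.get("__service", config.get("service")),
        "status": event.get("__severity", config.get("severity", "info")),
    }
```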
Backpressure behavior: Specify whether to block, drop, or queue events when all receivers are exerting backpressure. Defaults to
Use the Authentication method buttons to select one of these options:
Manual: Displays a field for you to enter an API key that is available in your Datadog profile.
Secret: This option exposes an API key (text secret) drop-down, in which you can select a stored secret that references the API access token described above. A Create link is available to store a new, reusable secret.
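Either way, the resolved key ends up on each outbound request. A minimal sketch of the two paths (the `secret_store` lookup is a hypothetical stand-in for a secret backend; Datadog expects the key in the DD-API-KEY request header):

```python
def auth_headers(method, api_key=None, secret_store=None, secret_name=None):
    """Illustrative: resolve the API key for either auth method.

    'manual' uses a key entered directly; 'secret' fetches a stored
    text secret by name from secret_store (a hypothetical backend).
    """
    if method == "manual":
        key = api_key
    elif method == "secret":
        key = secret_store[secret_name]
    else:
        raise ValueError(f"unknown auth method: {method}")
    # Datadog reads the API key from the DD-API-KEY request header.
    return {"DD-API-KEY": key}
```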
Persistent Queue Settings
This section is displayed when the Backpressure behavior is set to Persistent Queue.
Max file size: The maximum size to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to
Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, queueing is stopped, and data blocking is applied. Enter a numeral with units of KB, MB, etc.
Queue file path: The location for the persistent queue files. This will be of the form: your/path/here/<worker-id>/<output-id>. Defaults to
Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; select Gzip to enable compression.
Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.
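The queue-full semantics above can be modeled with a toy class. This is a simplified in-memory sketch (the real persistent queue writes files to disk); it only illustrates how the two behaviors differ once the size limit is hit:

```python
class PersistentQueueSketch:
    """Toy model of queue-full behavior: once the queue reaches its
    configured maximum size, 'drop' discards new events while 'block'
    signals backpressure to the sender. Existing contents are untouched.
    """
    def __init__(self, max_bytes, queue_full_behavior="block"):
        self.max_bytes = max_bytes
        self.behavior = queue_full_behavior
        self.items, self.used = [], 0

    def offer(self, event: bytes) -> str:
        if self.used + len(event) > self.max_bytes:
            # Queue is full: either push back on the sender or drop new data.
            return "blocked" if self.behavior == "block" else "dropped"
        self.items.append(event)
        self.used += len(event)
        return "queued"
```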
Pipeline: Pipeline to use to process data before sending it out via this output.
System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the LogStream Pipeline that processed the event). Supports wildcards. Other options include:
- cribl_host – LogStream Node that processed the event.
- cribl_wp – LogStream Worker Process that processed the event.
- cribl_input – LogStream Source that processed the event.
- cribl_output – LogStream Destination that processed the event.
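Wildcard support means a single pattern like `cribl_*` can select every system field. A minimal sketch of that matching (field values here are illustrative placeholders, not what LogStream emits):

```python
from fnmatch import fnmatch

# Placeholder values; in LogStream these come from the processing context.
AVAILABLE = {
    "cribl_pipe": "main",
    "cribl_host": "worker-1",
    "cribl_wp": "wp0",
    "cribl_input": "in_syslog",
    "cribl_output": "datadog",
}

def apply_system_fields(event, patterns):
    """Illustrative: add every available cribl_* field whose name
    matches one of the configured patterns (wildcards supported)."""
    out = dict(event)
    for name, value in AVAILABLE.items():
        if any(fnmatch(name, p) for p in patterns):
            out[name] = value
    return out
```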
Validate server certs: Toggle to Yes to reject certificates that are not authorized by a CA in the CA certificate path, nor by another trusted CA (e.g., the system's CA).
Round-robin DNS: Toggle to Yes to use round-robin DNS lookup. When a DNS server returns multiple addresses, this will cause LogStream to cycle through them in the order returned.
Compress: Toggle this slider to Yes to compress log events' payload body before sending.
Request timeout: Amount of time (in seconds) to wait for a request to complete before aborting it. Defaults to
Request concurrency: Maximum number of concurrent requests before blocking. This is set per Worker Process. Defaults to
Max body size (KB): Maximum size of the request body. Defaults to
Flush period (s): Maximum time between requests. Low values could cause the payload size to be smaller than its configured maximum. Defaults to
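Max body size and Flush period interact: a batch is sent when its body would exceed the size cap, or when the flush period has elapsed, whichever comes first (which is why low flush periods can produce undersized payloads). A minimal sketch of that logic, assuming an injectable clock for testability:

```python
import time

class Batcher:
    """Illustrative: buffer payloads and flush when the body would exceed
    max_body_kb, or when flush_period_s has elapsed since the last flush.
    """
    def __init__(self, max_body_kb, flush_period_s, clock=time.monotonic):
        self.max_bytes = max_body_kb * 1024
        self.period = flush_period_s
        self.clock = clock
        self.buf, self.size = [], 0
        self.last_flush = clock()

    def add(self, payload: bytes):
        """Return a flushed batch (list of payloads), or None if buffered."""
        if self.size + len(payload) > self.max_bytes:
            batch = self.flush()          # size cap hit: flush, then buffer
            self.buf.append(payload)
            self.size = len(payload)
            return batch
        self.buf.append(payload)
        self.size += len(payload)
        if self.clock() - self.last_flush >= self.period:
            return self.flush()           # flush period elapsed
        return None

    def flush(self):
        batch, self.buf, self.size = self.buf, [], 0
        self.last_flush = self.clock()
        return batch
```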
Extra HTTP headers: Name/Value pairs to pass as additional HTTP headers.
Environment: If you're using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.
Cribl LogStream uses a set of internal fields to assist in forwarding data to a Destination.
If an event contains the internal field __criblMetrics, LogStream will send it to Datadog as a metric event. Otherwise, LogStream will send it as a log event.
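That routing rule is simple enough to express directly. A sketch, reusing the US-region endpoints listed at the top of this page:

```python
LOGS_URL = "https://http-intake.logs.datadoghq.com/v1/input"
METRICS_URL = "https://api.datadoghq.com/api/v1/series"

def route(event):
    """Illustrative: events carrying the internal __criblMetrics field go
    to the metrics API; everything else is sent as a log event."""
    return METRICS_URL if "__criblMetrics" in event else LOGS_URL
```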
You can use the __service and __severity fields (described above) to override outbound event values for log events. No internal fields are supported for metric events.
For More Information
You might find these Datadog references helpful: