Azure Monitor Logs
Cribl Edge supports sending data to Azure Monitor Logs.
Type: Streaming | TLS Support: Yes | PQ Support: Yes
Configuring Cribl Edge to Output to Azure Monitor Logs
From the top nav, click Manage, then select a Fleet to configure. Next, you have two options:
To configure via the graphical QuickConnect UI, click Routing > QuickConnect (Stream) or Collect (Edge). Next, click + Add Destination at right. From the resulting drawer’s tiles, select Azure > Monitor Logs. Next, click either + Add Destination or (if displayed) Select Existing. The resulting drawer will provide the options below.
Or, to configure via the Routing UI, click Data > Destinations (Stream) or More > Destinations (Edge). From the resulting page’s tiles or the Destinations left nav, select Azure > Monitor Logs. Next, click New Destination to open a New Destination modal that provides the options below.
General Settings
Output ID: Enter a unique name to identify this Azure Monitor Logs definition.
Log type: The Record Type of events sent to this Log Analytics workspace. Defaults to Cribl.
Authentication Settings
Authentication method: Use the buttons to select one of these options:
Manual: Displays fields in which to enter your Azure Log Analytics Workspace ID and your Primary or Secondary Shared Workspace key. See the Azure Monitor documentation.
Secret: This option exposes a Secret key pair drop-down, in which you can select a stored secret that references the credentials described above. A Create link is available to store a new, reusable secret.
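For context on how these credentials are used: each request to the Log Analytics Data Collector API carries an HMAC-SHA256 signature built from the shared key. The sketch below follows Microsoft's documented signing scheme for that API, not Cribl Edge's internal code; the Workspace ID, key, and event content are placeholder values.

import base64
import datetime
import hashlib
import hmac
import json

import requests

# Hypothetical values; substitute your Workspace ID and Primary/Secondary key
workspace_id = "01234567-89ab-cdef-0123-456789abcdef"
shared_key = base64.b64encode(b"hypothetical-shared-key").decode()

body = json.dumps([{"message": "hello", "severity": "info"}]).encode("utf-8")

# RFC 1123 date; sent in the x-ms-date header and included in the signature
date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")

# String-to-sign format defined by the Data Collector API
string_to_sign = f"POST\n{len(body)}\napplication/json\nx-ms-date:{date}\n/api/logs"
signature = base64.b64encode(
    hmac.new(
        base64.b64decode(shared_key),
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
).decode()

response = requests.post(
    f"https://{workspace_id}.ods.opinsights.azure.com/api/logs",
    params={"api-version": "2016-04-01"},
    headers={
        "Authorization": f"SharedKey {workspace_id}:{signature}",
        "Log-Type": "Cribl",  # corresponds to the Log type setting above
        "x-ms-date": date,
        "Content-Type": "application/json",
    },
    data=body,
)
response.raise_for_status()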
Optional Settings
DNS name of API endpoint: Enter the DNS name of the Log API endpoint that sends log data to a Log Analytics workspace in Azure Monitor. Defaults to: .ods.opinsights.azure.com. Cribl Edge will add a prefix and suffix around this DNS name to construct a URI in this format: https://<Workspace_ID><your_DNS_name>/api/logs?api-version=<API version>.
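For example, with the default endpoint and a hypothetical Workspace ID of 01234567-89ab-cdef-0123-456789abcdef, the constructed URI would look like this (the api-version shown is the one Microsoft documents for the Data Collector API):

https://01234567-89ab-cdef-0123-456789abcdef.ods.opinsights.azure.com/api/logs?api-version=2016-04-01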
Resource ID: Resource ID of the Azure resource to associate the data with. This populates the _ResourceId property, and allows the data to be included in resource-centric queries. (Optional, but if this field is not specified, the data will not be included in resource-centric queries.)
Backpressure behavior: Whether to block, drop, or queue events when all receivers are exerting backpressure. Defaults to Block.
Tags: Optionally, add tags that you can use for filtering and grouping at the final destination. Use a tab or hard return between (arbitrary) tag names.
Persistent Queue Settings
This tab is displayed when the Backpressure behavior is set to Persistent Queue.
On Cribl-managed Cribl.Cloud Workers (with an Enterprise plan), this tab exposes only the Clear Persistent Queue button. A maximum queue size of 1 GB disk space is automatically allocated per Worker Process.
Max file size: The maximum data volume to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.
Max queue size: The maximum amount of disk space the queue is allowed to consume. Once this limit is reached, Cribl Edge stops queueing and applies the fallback Queue-full behavior. Enter a numeral with units of KB, MB, etc.
Queue file path: The location for the persistent queue files. Defaults to $CRIBL_HOME/state/queues. To this value, Cribl Edge will append /<worker-id>/<output-id>, as shown in the example below.
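For example, with the default path, a Worker Process with ID 0 writing for an Output ID of azure-logs (both hypothetical values) would queue to:

$CRIBL_HOME/state/queues/0/azure-logs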
Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.
Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.
Clear persistent queue: Click this button if you want to flush out files that are currently queued for delivery to this Destination. A confirmation modal will appear. (Appears only after Output ID has been defined.)
Processing Settings
Post-Processing
Pipeline: Pipeline to use to process data before sending it out via this output.
System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the Cribl Edge Pipeline that processed the event). Supports wildcards. Other options include the following (see the example event after this list):
cribl_host – Cribl Edge Node that processed the event.
cribl_wp – Cribl Edge Worker Process that processed the event.
cribl_input – Cribl Edge Source that processed the event.
cribl_output – Cribl Edge Destination that processed the event.
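For illustration, an event that passed through this Destination with all of the above fields selected might look like this (every value here is invented):

{
  "_raw": "original event body",
  "cribl_pipe": "my_pipeline",
  "cribl_host": "edge-node-01",
  "cribl_wp": "wp0",
  "cribl_input": "in_syslog",
  "cribl_output": "azure_monitor_logs"
}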
Advanced Settings
Validate server certs: Toggle to Yes to reject certificates that are not authorized by a CA in the CA certificate path, nor by another trusted CA (e.g., the system's CA).
Round-robin DNS: Toggle on to enable round-robin DNS lookup across multiple IP addresses, IPv4 and IPv6. When a DNS server resolves a Fully Qualified Domain Name (FQDN) to multiple IP addresses, Cribl Edge will sequentially use each address in the order they are returned by the DNS server for subsequent connection attempts.
Request timeout: Amount of time (in seconds) to wait for a request to complete before aborting it. Defaults to 30.
Request concurrency: Maximum number of concurrent requests per Worker Process. When Cribl Edge hits this limit, it begins throttling traffic to the downstream service. Defaults to 5. Minimum: 1. Maximum: 32.
Max body size (KB): Maximum size of the request body before compression. Defaults to 4096 KB in v.4.0.0 through v.4.0.2, and to 1024 KB in v.4.0.3. The actual request body size might exceed the specified value, because the Destination adds bytes when it writes to the downstream receiver. Cribl recommends that you experiment with the Max body size value until downstream receivers reliably accept all events.
Max events per request: Maximum number of events to include in the request body. The default of 0 allows unlimited events.
Flush period (sec): Maximum time between requests. Low settings could cause the payload size to be smaller than its configured maximum. Defaults to 1.
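Taken together, Max body size, Max events per request, and Flush period determine when a batch of events is sent: a request goes out when adding an event would exceed the body-size limit, when the event cap is reached, or when the flush period elapses, whichever comes first. The sketch below illustrates that interaction. It is a simplified model, not Cribl Edge's implementation, and all names in it are illustrative.

import time

MAX_BODY_SIZE = 1024 * 1024  # Max body size: 1024 KB, pre-compression
MAX_EVENTS = 0               # Max events per request: 0 = unlimited
FLUSH_PERIOD = 1.0           # Flush period (sec)

class Batcher:
    def __init__(self, send):
        self.send = send  # callable that performs the HTTP request
        self.events = []
        self.size = 0
        self.last_flush = time.monotonic()

    def add(self, serialized_event: bytes):
        # Flush first if adding this event would exceed the size limit
        if self.size + len(serialized_event) > MAX_BODY_SIZE:
            self.flush()
        self.events.append(serialized_event)
        self.size += len(serialized_event)
        # Flush on the event-count cap (0 means unlimited) or when the
        # flush period has elapsed
        if (MAX_EVENTS and len(self.events) >= MAX_EVENTS) or \
           time.monotonic() - self.last_flush >= FLUSH_PERIOD:
            self.flush()

    def flush(self):
        if self.events:
            # Join pre-serialized events into one JSON array request body
            self.send(b"[" + b",".join(self.events) + b"]")
        self.events = []
        self.size = 0
        self.last_flush = time.monotonic()

In a real sender, a timer would drive the period-based flush even when no new events arrive; for brevity, this sketch checks the period only when an event is added.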
Extra HTTP headers: Name-value pairs to pass as additional HTTP headers.
Failed request logging mode: Use this drop-down to determine which data should be logged when a request fails. Select among None (the default), Payload, or Payload + Headers. With this last option, Cribl Edge will redact all headers, except non-sensitive headers that you declare below in Safe headers.
Safe headers: Add headers to declare them as safe to log in plaintext. (Sensitive headers such as authorization will always be redacted, even if listed here.) Use a tab or hard return to separate header names.
Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.
Azure Monitor Limitations
The Azure Monitor Logs architecture limits the number of columns per table, characters per column name, and other parameters. For details, see Microsoft’s Azure Monitor Service Limits topic.
Azure will drop logs if your data exceeds these limits. To diagnose this, you can search in the Azure Data Explorer console with a query like this:
Operation | summarize count() by Detail
…for error messages of this form:
Data of type <type> was dropped: The number of custom fields <number> is above the limit of 500 fields per data type.
Notes on HTTP-based Outputs
Cribl Edge will attempt to use keepalives to reuse a connection for multiple requests. Two minutes after a connection is first used, it will be discarded and a new one established. This prevents sticking to a particular destination when there is a constant flow of events.
If keepalives are not supported by the server (or if the server closes a pooled connection while idle), a new connection will be established for the next request.
When resolving the Destination’s hostname, Cribl Edge will pick the first IP in the list for use in the next connection. Enable Round-robin DNS to better balance distribution of events between destination cluster nodes.