
Azure Sentinel

Cribl Stream can send log and metric events to the Azure Sentinel SIEM. This Destination encrypts and sends events via the HTTPS protocol. (These docs use the terms “Azure Sentinel” and “Microsoft Sentinel” interchangeably.)

Type: Streaming | TLS Support: No | PQ Support: Yes

Prerequisites

Before configuring your Azure Sentinel Destination(s), you must prepare the Azure workspace, create data collection rules (DCRs), and obtain the ingestion URL that will receive the data from your Destination(s).

Complete these preparatory steps, which are described in Azure Sentinel Integration, before configuring the Azure Sentinel Destination. That topic explains the overall Sentinel/Cribl workflow, provides necessary context, and shows you how to obtain the values you’ll need to configure the Destination.

Behind the scenes, this Destination uses the Azure Monitor Logs Ingestion API.

Configuring Cribl Stream to Output to Azure Sentinel

From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:

To configure via the graphical QuickConnect UI, click Routing > QuickConnect (Stream) or Collect (Edge). Next, click Add Destination at right. From the resulting drawer’s tiles, select Azure Sentinel. Next, click either Add Destination or (if displayed) Select Existing. The resulting drawer will provide the options below.

Or, to configure via the Routing UI, click Data > Destinations (Stream) or More > Destinations (Edge). From the resulting page’s tiles or the Destinations left nav, select Azure Sentinel. Next, click Add Destination to open a New Destination modal that provides the options below.

If you intend to send multiple event types, or tables, to Sentinel, each table requires its own endpoint URL. There are three ways to configure this:

  • Multiple Destinations: Configure separate Microsoft Sentinel Destinations, each with its own URL, and use Routes to direct data to the appropriate Destination.
  • Multiple Destinations and an Output Router: Use an Output Router to split events from a single Source into multiple Destinations, each corresponding to a specific table.
  • Single Destination and overwriting the URL in a Pipeline: Configure a single Microsoft Sentinel Destination and dynamically overwrite the URL within Pipelines using the __url internal field to direct events to the correct table.

General Settings

Output ID: Enter a unique name to identify this HTTP output definition.

Endpoint configuration: Method for configuring the endpoint. Options include:

  • URL – lets you directly enter the data collection endpoint. This method is the simplest way of configuring the endpoint. See Obtaining a URL for more information.

  • ID – lets you enter individual IDs that Cribl Stream uses to create the URL used as the data collection endpoint.

Selecting URL exposes the following field.

URL: Endpoint URL to send events to. The internal field __url, where present in events, will override the URL and ID values. See Internal Fields below.

Selecting ID exposes the following fields.

Data collection endpoint: Data collection endpoint (DCE) in the format https://<endpoint-name>.<identifier>.<region>.ingest.monitor.azure.com.

Data collection rule ID: Immutable ID for the data collection rule (DCR).

Stream name: Name of the Sentinel table in which to store events.
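
When you configure by ID, the Destination assembles the ingestion endpoint from these three values. The sketch below shows the general shape of that composed URL, based on the Azure Monitor Logs Ingestion API's documented format; the DCE host, DCR ID, stream name, and api-version shown are placeholder assumptions, not values taken from your environment.

    // Sketch: how the DCE, DCR ID, and stream name typically combine into a
    // Logs Ingestion API endpoint. All values are hypothetical placeholders.
    const dce = 'https://my-dce.abc1.eastus-1.ingest.monitor.azure.com'; // Data collection endpoint
    const dcrId = 'dcr-00000000000000000000000000000000';               // Data collection rule ID (immutable ID)
    const streamName = 'Custom-MyTable_CL';                             // Stream name (Sentinel table)

    const url = `${dce}/dataCollectionRules/${dcrId}/streams/${streamName}?api-version=2023-01-01`;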

Optional Settings

Backpressure behavior: Whether to block, drop, or queue events when all receivers are exerting backpressure.

Tags: Optionally, add tags that you can use to filter and group Destinations in Cribl Stream’s Manage Destinations page. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.

Authentication Settings

  • Login URL: Endpoint for the OAuth API call, which is expected to be a POST call. This URL takes the form https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token.

  • OAuth secret: Secret parameter value to pass in the API call’s request body.

  • Client ID: JavaScript expression used to compute the application (client) ID for the Azure application. Can be a constant.
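
Together, these three values drive a standard Microsoft Entra ID client-credentials token request that the Destination makes on your behalf. The sketch below illustrates that exchange; the tenant ID, client ID, and secret are placeholders, and the https://monitor.azure.com/.default scope is an assumption based on Azure Monitor's documented requirements rather than a field in this UI.

    // Sketch of the OAuth 2.0 client-credentials exchange behind these settings.
    // All IDs and the secret are hypothetical placeholders.
    async function fetchSentinelToken() {
      const loginUrl = 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token';
      const body = new URLSearchParams({
        grant_type: 'client_credentials',
        client_id: '<application-client-id>',        // Authentication Settings > Client ID
        client_secret: '<oauth-secret>',             // Authentication Settings > OAuth secret
        scope: 'https://monitor.azure.com/.default', // assumed scope for Azure Monitor ingestion
      });
      const resp = await fetch(loginUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body,
      });
      const { access_token } = await resp.json();    // bearer token sent on ingestion requests
      return access_token;
    }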

Persistent Queue Settings

This tab is displayed when the Backpressure behavior is set to Persistent Queue.

On Cribl-managed Cribl.Cloud Workers (with an Enterprise plan), this tab exposes only the destructive Clear Persistent Queue button (described below in this section). A maximum queue size of 1 GB disk space is automatically allocated per PQ‑enabled Destination, per Worker Process. The 1 GB limit is on outbound uncompressed data, and no compression is applied to the queue.

This limit is not configurable. If the queue fills up, Cribl Stream will block outbound data. To configure the queue size, compression, queue-full fallback behavior, and other options below, use a hybrid Group.

Max file size: The maximum data volume to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space that the queue is allowed to consume on each Worker Process. Once this limit is reached, this Destination will stop queueing data and apply the Queue‑full behavior. Required, and defaults to 5 GB. Accepts positive numbers with units of KB, MB, GB, etc. Can be set as high as 1 TB, unless you’ve configured a different Max PQ size per Worker Process in Group Settings.

Queue file path: The location for the persistent queue files. Defaults to $CRIBL_HOME/state/queues. To this value, Cribl Stream will append /<worker-id>/<output-id>.

Compression: Codec to use to compress the persisted data, once a file is closed. Defaults to None; Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue is exerting backpressure (because disk is low or at full capacity). Block is the same behavior as non-PQ blocking, corresponding to the Block option on the Backpressure behavior drop-down. Drop new data throws away incoming data, while leaving the contents of the PQ unchanged.

Strict ordering: The default Yes position enables FIFO (first in, first out) event forwarding. When receivers recover, Cribl Stream will send earlier queued events before forwarding newly arrived events. To instead prioritize new events before draining the queue, toggle this off. Doing so will expose this additional control:

  • Drain rate limit (EPS): Optionally, set a throttling rate (in events per second) on writing from the queue to receivers. (The default 0 value disables throttling.) Throttling the queue’s drain rate can boost the throughput of new/active connections, by reserving more resources for them. You can further optimize Workers’ startup connections and CPU load at Group Settings > Worker Processes.

Processing Settings

Post‑Processing

Pipeline: Pipeline to process data before sending the data out using this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the Cribl Stream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – Cribl Stream Node that processed the event.
  • cribl_input – Cribl Stream Source that processed the event.
  • cribl_output – Cribl Stream Destination that processed the event.
  • cribl_route – Cribl Stream Route (or QuickConnect) that processed the event.
  • cribl_wp – Cribl Stream Worker Process that processed the event.

Advanced Settings

Validate server certs: Reject certificates that are not authorized by a trusted CA (e.g., the system’s CA). Toggle this to No if you want Cribl Stream to accept such certificates. Defaults to Yes.

Round-robin DNS: Toggle on to enable round-robin DNS lookup across multiple IP addresses, IPv4 and IPv6. When a DNS server resolves a Fully Qualified Domain Name (FQDN) to multiple IP addresses, Cribl Stream will sequentially use each address in the order they are returned by the DNS server for subsequent connection attempts.

Compress: By default, Cribl Stream gzip-compresses the payload body before sending. Toggle this off if you want Cribl Stream to send the payload body uncompressed.

Keep alive: By default, Cribl Stream sends Keep-Alive headers to the remote server and preserves the connection from the client side up to a maximum of 120 seconds. Toggle this off if you want Cribl Stream to close the connection immediately after sending a request.

Request timeout: Amount of time (in seconds) to wait for a request to complete before aborting it. Defaults to 30.

Request concurrency: Maximum number of concurrent requests per Worker Process. When Cribl Stream hits this limit, it begins throttling traffic to the downstream service. Defaults to 5. Minimum: 1. Maximum: 32.

Max body size (KB): Maximum size of the request body before compression. Defaults to 1000 KB, the maximum size the API allows per call. Be aware that high values can cause high memory usage per Worker Node, especially if a dynamically constructed URL (see Internal Fields) causes this Destination to send events to more than one URL. The actual request body size might exceed the specified value because the Destination adds bytes when it writes to the downstream receiver. Cribl recommends that you experiment with the Max body size value until downstream receivers reliably accept all events.

Max events per request: Maximum number of events to include in the request body. The 0 default allows unlimited events.

Flush period (sec): Maximum time between requests. Low values could cause the payload size to be smaller than its configured maximum. Defaults to 1.

Extra HTTP headers: Name-value pairs to pass to all events as additional HTTP headers. Values will be sent encrypted. You can also add headers dynamically on a per-event basis in the __headers field. See Internal Fields below.

Failed request logging mode: Use this drop-down to determine which data should be logged when a request fails. Select among None (the default), Payload, or Payload + Headers. With this last option, Cribl Stream will redact all headers, except non-sensitive headers that you declare below in Safe headers.

Safe headers: Add headers to declare them as safe to log in plaintext. (Sensitive headers such as authorization will always be redacted, even if listed here.) Use a tab or hard return to separate header names.

Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Internal Fields

Cribl Stream uses a set of internal fields to assist in forwarding data to a Destination.

Fields for this Destination:

  • __criblMetrics
  • __url
  • __headers

If an event contains the internal field __criblMetrics, Cribl Stream will send it to the HTTP endpoint as a metric event. Otherwise, Cribl Stream will send it as a log event.

If an event contains the internal field __url, that field’s value will override the General Settings > URL value. This way, you can determine the endpoint URL dynamically.

For example, you could create a Pipeline containing an Eval Function that adds the __url field, and connect that Pipeline to your Azure Sentinel Destination. Configure the Eval Function to set __url’s value to a URL that varies depending on a global variable, or on some property of the event, or on some other dynamically-generated value that meets your needs.
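
For instance, the Eval Function's value expression for __url might look like the following sketch. The DCE host, DCR ID, table names, and the sourcetype field used to pick a table are all hypothetical; substitute values from your own environment.

    // Hypothetical Eval Function value expression for the __url field.
    // Sends events to one of two Sentinel tables based on the event's sourcetype.
    `https://my-dce.abc1.eastus-1.ingest.monitor.azure.com/dataCollectionRules/dcr-00000000000000000000000000000000/streams/${
      sourcetype === 'linux_secure' ? 'Custom-LinuxSecure_CL' : 'Custom-General_CL'
    }?api-version=2023-01-01`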

If an event contains the internal field __headers, that field’s value will be a JSON object containing Name-value pairs, each of which defines a header. By creating Pipelines that set the value of __headers according to conditions that you specify, you can add headers dynamically.

For example, you could create a Pipeline containing Eval Functions that add the __headers field, and connect that Pipeline to your Azure Sentinel Destination. Configure the Eval Functions to set __headers values to Name-value pairs that vary depending on some properties of the event, or on dynamically-generated values that meet your needs.
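
As a sketch, such an Eval Function's value expression for __headers could look like this; the header name (x-example-source) and the host field it reads are hypothetical examples, not headers that Sentinel requires.

    // Hypothetical Eval Function value expression for the __headers field.
    // Adds a custom header whose value depends on the event's host field.
    ({ 'x-example-source': host ? `cribl-${host}` : 'cribl-unknown' })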

Here’s an overview of how to add headers:

  • To add “dynamic” (a.k.a. “custom”) headers to some events but not others, use the __headers field.
  • To define headers to add to all events, use Advanced Settings > Extra HTTP Headers.
  • An event can include headers added by both methods. So if you use __headers to add { "api-key": "foo" } and Extra HTTP Headers to add { "goat": "Kid A" }, you’ll get both headers.
  • Headers added via the __headers field take precedence. So if you use __headers to add { "api-key": "foo" } and Extra HTTP Headers to add { "api-key": "bar" }, you’ll get only one header, and that will be { "api-key": "foo" }.
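
Conceptually, the two header sources combine like a shallow merge in which __headers wins on conflicts. The sketch below only illustrates that precedence, reusing the examples above; it is not Cribl's actual implementation.

    // Illustration of header precedence, using the examples from the list above.
    const extraHttpHeaders = { 'goat': 'Kid A', 'api-key': 'bar' }; // Advanced Settings > Extra HTTP Headers
    const eventHeaders = { 'api-key': 'foo' };                      // the event's __headers field

    // __headers takes precedence, so the request carries goat: "Kid A" and api-key: "foo".
    const effectiveHeaders = { ...extraHttpHeaders, ...eventHeaders };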

Notes on HTTP-Based Outputs

  • To proxy outbound HTTP/S requests, see System Proxy Configuration.

  • Cribl Stream will attempt to use keepalives to reuse a connection for multiple requests. Two minutes after a connection’s first use, Cribl Stream will discard it and attempt a new one. This prevents Cribl Stream from sticking to a particular destination when there is a constant flow of events.

  • If the server does not support keepalives (or if the server closes a pooled connection while idle), Cribl Stream will establish a new connection for the next request.

  • When resolving the Destination’s hostname, Cribl Stream will pick the first IP in the list for use in the next connection. Enable Round‑robin DNS to better balance distribution of events among destination cluster nodes.