
CrowdStrike Falcon Next-Gen SIEM Destination

The CrowdStrike Falcon Next-Gen SIEM Destination sends data to CrowdStrike Falcon Next-Gen SIEM.

Type: Streaming | TLS Support: Configurable | PQ Support: Yes

Prerequisites

Connector

Within CrowdStrike Falcon Next-Gen SIEM’s Data onboarding section, create a Cribl Data Connector. Select Cribl as the connector type during setup to take advantage of functionality and future enhancements that aren’t available through generic HTTP Event Collector (HEC) connectors. This also allows you to filter ingested Data Sources by Cribl Data Connector for streamlined management of Cribl Stream data streams.

SIEM Parser

Each Next-Gen SIEM connector requires an assigned parser. Map each connector to the appropriate parser for its vendor/product combination. If you have multiple connectors for the same vendor/product combination, you can reuse the same parser. If no suitable parser exists, create a new one, or clone and modify an existing parser with a similar data format.

By default, CrowdStrike Falcon Next-Gen SIEM parsers expect data in the vendor’s original format. Ensure you only send the _raw field. For standard data ingestion, use the passthru parser. This will preserve the original event format sent to Next-Gen SIEM. However, some products, like Zscaler, usually require specific output formats.

See how to test your SIEM parser.

Syslog

When using a Cribl Stream Syslog Source, explicitly enter _raw in the Fields to keep list under the Syslog Source’s Optional Settings. This prevents Cribl Stream from automatically parsing the syslog event into multiple fields, which would unnecessarily increase the event size, and it ensures that the original syslog event is routed from the Source to the CrowdStrike Falcon Next-Gen SIEM Destination.
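
To see why automatic parsing inflates the payload, compare the serialized size of an event that carries only _raw against the same event broken out into fields. The following is a rough illustration only; the parsed field names are hypothetical and not the exact fields Cribl Stream’s syslog processing produces:

```python
import json

syslog_line = "<134>Oct 11 22:14:15 fw01 action=allow src=10.0.0.5 dst=8.8.8.8 dport=53"

# Event as sent when only _raw is kept.
raw_only = {"_raw": syslog_line}

# The same event after automatic parsing (hypothetical field breakdown).
parsed = {
    "_raw": syslog_line,
    "facility": 16,
    "severity": 6,
    "host": "fw01",
    "action": "allow",
    "src": "10.0.0.5",
    "dst": "8.8.8.8",
    "dport": 53,
}

print(len(json.dumps(raw_only)))  # smaller payload, original format preserved
print(len(json.dumps(parsed)))    # noticeably larger for the same underlying data
```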

Scaling and Performance Considerations

To ensure optimal performance and avoid data loss, adhere to the following guidelines regarding connector sizing and scaling:

Each Next-Gen SIEM (NG SIEM) connector has a maximum ingest capacity of 10,000 events per second (EPS). If your data volume exceeds this limit, create additional NG SIEM connectors as needed. A corresponding Cribl Stream CrowdStrike Falcon Next-Gen SIEM Destination must be configured for each NG SIEM connector.

For example, consider a scenario where you generate three terabytes of firewall logs daily, with an average event size of 1,250 bytes. To determine your EPS, you can use the Cribl Stream monitoring view to track peak EPS over a 24-hour period. Alternatively, if your data rate is consistent, you can calculate the average EPS with the following formula:

(3 x 10^12 bytes/day) x (1 event/1250 bytes) x (1 day/86400 seconds) = 27,777 EPS

In this example, you would need to create at least three Cribl Stream CrowdStrike Falcon Next-Gen SIEM Destinations, each pointing to a separate NG SIEM connector that utilizes the same firewall parser. If your peak EPS consistently exceeds the average during certain periods, provision additional connectors to handle the increased load.
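
If you prefer to script this sizing exercise, the following minimal sketch reproduces the arithmetic above. The input values come from the example scenario; substitute your own daily volume and average event size.

```python
import math

# Example inputs from the scenario above; replace with your own values.
daily_bytes = 3e12            # 3 TB of firewall logs per day
avg_event_bytes = 1250        # average event size in bytes
connector_eps_limit = 10_000  # maximum ingest capacity per NG SIEM connector

# Average events per second over a 24-hour day.
eps = daily_bytes / avg_event_bytes / 86_400              # ≈ 27,778 EPS

# Each NG SIEM connector (and its matching Destination) handles at most 10,000 EPS.
connectors_needed = math.ceil(eps / connector_eps_limit)  # 3

print(f"Average EPS: {eps:,.0f}")
print(f"Connectors/Destinations needed: {connectors_needed}")
```

If peak EPS matters more than the average in your environment, run the same calculation against the peak value instead.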

Configuring a CrowdStrike Falcon Next-Gen SIEM Destination

  1. From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:

    • To configure via QuickConnect, click Routing, then QuickConnect (Cribl Stream) or Collect (Cribl Edge). Click Add Destination and, from the resulting drawer’s tiles, select CrowdStrike Falcon Next-Gen SIEM. Then click either Add Destination or (if displayed) Select Existing.
    • To configure via the Routes, click Data, then Destinations (Cribl Stream), or More, then Destinations (Cribl Edge). Select CrowdStrike Falcon Next-Gen SIEM from the list of tiles or the Destinations list. Then click Add Destination to open a New Destination modal.
  2. In the Destination modal, configure the following under General Settings:

    • Output ID: Enter a unique name to identify this Destination definition. If you clone this Destination, Cribl Stream will add -CLONE to the original Output ID.
    • Next-Gen SIEM endpoint: The URL for sending data to CrowdStrike’s Falcon Next-Gen SIEM. This URL is generated within the CrowdStrike platform and directs your data to the appropriate SIEM instance for processing.
    • Request format: Select JSON as the request format. CrowdStrike Falcon Next-Gen SIEM expects events to be encapsulated within a JSON structure. The JSON option ensures proper data delivery.

    The Raw option sends only the event’s _raw value. This option is intended for advanced use cases where specific data manipulation or custom formatting is required. Use with caution, as it may not be compatible with standard Next-Gen SIEM ingest requirements.

  3. Under Authentication select an Authentication method from the drop-down:

    • Manual: Displays a Next-Gen SIEM auth token field for you to enter your authentication token.
    • Secret: Displays a Next-Gen SIEM auth token (text secret) drop-down, in which you can select a stored secret that references the API token described above. A Create link is available to store a new, reusable secret.
  4. Next, you can configure the following Optional Settings that you’ll find across many Cribl Destinations:

    • Backpressure behavior: Select whether to block, drop, or queue events when all receivers are exerting backpressure. (Causes might include a broken or denied connection, or a rate limiter.) Defaults to Block.
    • Tags: Optionally, add tags that you can use to filter and group Destinations on the Destinations page. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.
  5. Under Processing > Post-Processing, keep the cribl_pipe field to assist with mapping and troubleshooting within NG SIEM. The payload sent to NG SIEM must contain only the _raw field; do not include _time, host, or any other fields. NG SIEM ingest licensing is based on the size of the entire payload, and default NG SIEM parsers are designed to process only the _raw field. (A sketch of the resulting payload appears after these steps.)

    • To retain only the _raw field and discard all others, configure an Eval Function within a post-processing Pipeline as follows:
      • In the Keep fields input, enter _raw.
      • In the Remove fields input, enter *.
  6. Optionally, configure any Persistent Queue, Retries, and Advanced settings outlined in the sections below.

  7. Select Save, then Commit & Deploy.

  8. Use the Test tab in the Destination’s configuration modal to validate the configuration and connectivity of your Destination.
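
For reference, the sketch below shows, conceptually, what the Destination delivers once the post-processing above is in place: a JSON-formatted event containing only _raw, posted to the connector endpoint with the connector’s auth token. The endpoint URL, event content, and bearer-style Authorization header are assumptions for illustration; use the exact URL, token, and header scheme that your NG SIEM connector provides.

```python
import json
import urllib.request

# Placeholders: copy the real endpoint URL and auth token from your NG SIEM connector.
NGSIEM_ENDPOINT = "https://your-ngsiem-endpoint.example/services/collector"  # hypothetical URL
AUTH_TOKEN = "..."                                                           # connector auth token

# The event carries only _raw, matching the post-processing Pipeline above.
event = {"_raw": "<134>Oct 11 22:14:15 fw01 action=allow src=10.0.0.5 dst=8.8.8.8"}

req = urllib.request.Request(
    NGSIEM_ENDPOINT,
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Bearer-style auth is an assumption; use the scheme your connector documents.
        "Authorization": f"Bearer {AUTH_TOKEN}",
    },
    method="POST",
)

with urllib.request.urlopen(req, timeout=30) as resp:
    print(resp.status)  # a 2xx response indicates the connector accepted the event
```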

Persistent Queue Settings

The Persistent Queue Settings tab displays when the Backpressure behavior option in General settings is set to Persistent Queue. Persistent queue buffers and preserves incoming events when a downstream Destination has an outage or experiences backpressure.

Before enabling persistent queue, learn more about persistent queue behavior and how to optimize it for your system.

On Cribl-managed Cloud Workers (with an Enterprise plan), this tab exposes only the destructive Clear Persistent Queue button (described at the end of this section). A maximum queue size of 1 GB disk space is automatically allocated per PQ‑enabled Destination, per Worker Process. The 1 GB limit is on outbound uncompressed data, and no compression is applied to the queue.

This limit is not configurable. If the queue fills up, Cribl Stream/Edge will block outbound data. To configure the queue size, compression, queue-full fallback behavior, and other options below, use a hybrid Group.

Mode: Use this menu to select when Cribl Stream/Edge engages the persistent queue in response to backpressure events from this Destination. The options are:

  • Error: Queues and stores data on a disk only when the Destination is in an error state.
  • Backpressure: After the Destination has been in a backpressure state for a specified amount of time, Cribl Stream/Edge queues and stores data to a disk until the backpressure event resolves.
  • Always on: Cribl Stream/Edge immediately queues and stores all data on a disk for all events, even when there is no backpressure.

If a Worker/Edge Node starts with an invalid Mode setting, it automatically switches to Error mode. This might happen if the Worker/Edge Node is running a version that does not support other modes (older than 4.9.0), or if it encounters a nonexistent value in YAML configuration files.

Max file size: The maximum data volume to store in each queue file before closing it. Enter a numeral with units of KB, MB, etc. Defaults to 1 MB.

Max queue size: The maximum amount of disk space that the queue can consume on each Worker Process. When the queue reaches this limit, the Destination stops queueing data and applies the Queue‑full behavior. Defaults to 5 GB. This field accepts positive numbers with units of KB, MB, GB, and so on. You can set it as high as 1 TB, unless you’ve configured a different Max PQ size per Worker Process on the Group Settings/Fleet Settings page.
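
To estimate how long the queue can absorb an outage, divide the configured maximum queue size by your outbound data rate. The sketch below uses the default 5 GB limit and example per-Worker-Process rates; the EPS and event-size values are assumptions, not measurements.

```python
# Rough persistent-queue headroom estimate (uncompressed data, per Worker Process).
max_queue_bytes = 5 * 1024**3   # Max queue size: 5 GB default
worker_process_eps = 5_000      # assumed events/sec handled by one Worker Process
avg_event_bytes = 1250          # assumed average event size

bytes_per_second = worker_process_eps * avg_event_bytes
headroom_seconds = max_queue_bytes / bytes_per_second

print(f"Approximate queue headroom: {headroom_seconds / 60:.1f} minutes")  # ≈ 14.3 minutes
```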

Queue file path: The location for the persistent queue files. Defaults to $CRIBL_HOME/state/queues. Cribl Stream/Edge will append /<worker‑id>/<output‑id> to this value.

Compression: Set the codec to use when compressing the persisted data after closing a file. Defaults to None. Gzip is also available.

Queue-full behavior: Whether to block or drop events when the queue begins to exert backpressure. A queue begins to exert backpressure when the disk is running low on space or is at full capacity. This setting has two options:

  • Block: The output will refuse to accept new data until the receiver is ready. The system will return block signals back to the sender.
  • Drop new data: Discard all new events until the backpressure event has resolved and the receiver is ready.

Backpressure duration limit: When Mode is set to Backpressure, this setting controls how long to wait during network slowdowns before activating queues. A shorter duration helps prevent loss of critical data, while a longer duration helps avoid unnecessary queue transitions in environments with frequent, brief network fluctuations. The default value is 30 seconds.

Strict ordering: By default, the Yes setting enables FIFO (first in, first out) event forwarding, ensuring Cribl Stream/Edge sends earlier queued events first when receivers recover. The persistent queue flushes every 10 seconds in this mode. Changing the setting to No allows you to prioritize new events over queued events and configure a custom drain rate for the queue. When No is enabled, this option appears:

  • Drain rate limit (EPS): Optionally, set a throttling rate (in events per second) on writing from the queue to receivers. (The default 0 value disables throttling.) Throttling the queue drain rate can boost the throughput of new and active connections by reserving more resources for them. You can further optimize Worker startup connections and CPU load in the Group Settings/Fleet Settings > Worker Processes settings.

Clear Persistent Queue: For Cloud Enterprise only, click this button if you want to delete the files that are currently queued for delivery to this Destination. If you click this button, a confirmation modal appears. Clearing the queue frees up disk space by permanently deleting the queued data, without delivering it to downstream receivers. This button only appears after you define the Output ID.

Use the Clear Persistent Queue button with caution to avoid data loss. See Steps to Safely Disable and Clear Persistent Queues for more information.

Processing Settings

Post‑Processing

Pipeline: Pipeline or Pack to process data before sending the data out using this output.

System fields: A list of fields to automatically add to events that use this output. By default, includes cribl_pipe (identifying the Cribl Stream Pipeline that processed the event). Supports wildcards. Other options include:

  • cribl_host – Cribl Stream Node that processed the event.
  • cribl_input – Cribl Stream Source that processed the event.
  • cribl_output – Cribl Stream Destination that processed the event.
  • cribl_route – Cribl Stream Route (or QuickConnect) that processed the event.
  • cribl_wp – Cribl Stream Worker Process that processed the event.

Retries

Honor Retry-After header: Whether to honor a Retry-After header, provided that the header specifies a delay no longer than 180 seconds. Cribl Stream/Edge limits the delay to 180 seconds even if the Retry-After header specifies a longer delay. When enabled, any Retry-After header received takes precedence over all other options configured in the Retries section. When disabled, all Retry-After headers are ignored.

Settings for failed HTTP requests: When you want to automatically retry requests that receive particular HTTP response status codes, use these settings to list those response codes.

For any HTTP response status codes that are not explicitly configured for retries, Cribl Stream/Edge applies the following rules:

  • Any status code in the 1xx, 3xx, or 4xx series: Drop the request.
  • Any status code in the 5xx series: Retry the request.

Upon receiving a response code that’s on the list, Cribl Stream/Edge first waits for a set time interval called the Pre-backoff interval and then begins retrying the request. Time between retries increases based on an exponential backoff algorithm whose base is the Backoff multiplier, until the retry interval reaches the Backoff limit (ms). At that point, Cribl Stream/Edge continues retrying the request without increasing the time between retries any further.

If the sender (which manages the connection to the Destination) is at capacity, it will not accept any incoming events. These incoming events originate internally, from an earlier stage of the data flow, and include both new requests and retries of requests the Destination sends to its external service. Any events that were already in transit when the sender reached capacity will continue to be processed downstream.

Sender capacity is freed up when an outgoing request succeeds or encounters a non-retryable error. When the sender has available capacity again, it resumes accepting incoming events. Capacity management is influenced by the number of active connections and by configured limits such as concurrency and buffer sizes. If a Pipeline sends events faster than the Destination can process them, buffers may fill up, leading to backpressure and Sender at capacity warnings. This backpressure prevents the sender from accepting additional requests until capacity is restored.

By default, this Destination has no response codes configured for automatic retries. For each response code you want to add to the list, select Add Setting and configure the following settings:

  • HTTP status code: A response code that indicates a failed request, for example 429 (Too Many Requests) or 503 (Service Unavailable).
  • Pre-backoff interval (ms): The amount of time to wait before beginning retries, in milliseconds. Defaults to 1000 (one second).
  • Backoff multiplier: The base for the exponential backoff algorithm. A value of 2 (the default) means that Cribl Stream/Edge will retry after 2 seconds, then 4 seconds, then 8 seconds, and so on.
  • Backoff limit (ms): The maximum backoff interval Cribl Stream/Edge should apply for its final retry, in milliseconds. Default (and minimum) is 10,000 (10 seconds); maximum is 180,000 (180 seconds, or 3 minutes).
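
The interplay of these three settings can be sketched as a simple delay schedule. This is an interpretation of the behavior described above (each retry delay is the previous one multiplied by the backoff multiplier, starting from the pre-backoff interval and capped at the backoff limit), not Cribl’s internal implementation:

```python
def retry_delays(pre_backoff_ms=1000, multiplier=2, limit_ms=10_000, attempts=6):
    """Sketch of the retry delay schedule implied by the settings above.

    Assumes each delay is the previous delay times the backoff multiplier,
    starting from the pre-backoff interval and capped at the backoff limit.
    """
    delays = []
    delay = pre_backoff_ms
    for _ in range(attempts):
        delay = min(delay * multiplier, limit_ms)
        delays.append(delay)
    return delays

# With the defaults: retries after 2 s, 4 s, 8 s, then capped at 10 s thereafter.
print(retry_delays())  # [2000, 4000, 8000, 10000, 10000, 10000]
```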

Retry timed-out HTTP requests: When you want to automatically retry requests that have timed out, toggle this control on to display the following settings for configuring retry behavior:

  • Pre-backoff interval (ms): The amount of time to wait before beginning retries, in milliseconds. Defaults to 1000 (one second).
  • Backoff multiplier: The base for the exponential backoff algorithm. A value of 2 (the default) means that Cribl Stream/Edge will retry after 2 seconds, then 4 seconds, then 8 seconds, and so on.
  • Backoff limit (ms): The maximum backoff interval Cribl Stream/Edge should apply for its final retry, in milliseconds. Default (and minimum) is 10,000 (10 seconds); maximum is 180,000 (180 seconds, or 3 minutes).

Advanced Settings

Validate server certs: Defaults to toggled on to reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (for example, the system’s CA).

Round-robin DNS: Defaults to toggled on to enable round-robin DNS lookup across multiple IP addresses (IPv4 and IPv6). When a DNS server resolves a Fully Qualified Domain Name (FQDN) to multiple IP addresses, Cribl Stream will sequentially use each address, in the order returned by the DNS server, for subsequent connection attempts.

Compress: Defaults to toggled on to compress the payload body before sending.

Request timeout: Amount of time (in seconds) to wait for a request to complete before aborting it. Defaults to 30.

Request concurrency: Maximum number of concurrent requests per Worker Process. When Cribl Stream hits this limit, it begins throttling traffic to the downstream service. Defaults to 5. Minimum: 1. Maximum: 32. Each request can potentially hit a different HEC receiver.

Max body size (KB): Maximum size, in KB, of the request body. Defaults to 4096. Lowering the size can potentially result in more parallel requests and also cause outbound requests to be made sooner.

Max events per request: Maximum number of events to include in the request body. The 0 default allows unlimited events.

Flush period (sec): Maximum time between requests. Low values can cause the payload size to be smaller than the configured Max body size. Defaults to 1.

  • Retries happen on this flush interval.
  • Any HTTP response code in the 2xx range is considered success.
  • Any response code in the 5xx range is considered a retryable error, which will not trigger Persistent Queue (PQ) usage.
  • Any other response code will trigger PQ (if PQ is configured as the Backpressure behavior).
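
The sketch below illustrates how Max body size, Max events per request, and Flush period interact when batching outbound requests: a batch is sent when adding another event would exceed the body-size limit, when the event count reaches the configured maximum (if nonzero), or when the flush period elapses, whichever comes first. This is a simplified illustration of the general pattern, with hypothetical class and method names, not Cribl’s internal implementation:

```python
import time


class BatchSketch:
    """Illustrative batcher: flush on body size, event count, or elapsed time."""

    def __init__(self, max_body_kb=4096, max_events=0, flush_period_sec=1.0):
        self.max_body_bytes = max_body_kb * 1024
        self.max_events = max_events          # 0 means unlimited events per request
        self.flush_period_sec = flush_period_sec
        self.buffer = []
        self.buffer_bytes = 0
        self.last_flush = time.monotonic()

    def add(self, payload: bytes):
        # Flush first if adding this event would exceed the body-size limit.
        if self.buffer and self.buffer_bytes + len(payload) > self.max_body_bytes:
            self.flush()
        self.buffer.append(payload)
        self.buffer_bytes += len(payload)
        # Flush when the event-count limit or the flush period is reached.
        if self.max_events and len(self.buffer) >= self.max_events:
            self.flush()
        elif time.monotonic() - self.last_flush >= self.flush_period_sec:
            self.flush()

    def flush(self):
        if self.buffer:
            body = b"\n".join(self.buffer)
            print(f"sending {len(self.buffer)} events, {len(body)} bytes")  # HTTP request goes here
        self.buffer, self.buffer_bytes = [], 0
        self.last_flush = time.monotonic()
```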

Extra HTTP headers: Click Add Header to add Name/Value pairs to pass as additional HTTP headers. Values will be sent encrypted.

Failed request logging mode: Use this drop-down to determine which data should be logged when a request fails. Select among None (the default), Payload, or Payload + Headers. With this last option, Cribl Stream will redact all headers, except non-sensitive headers that you declare below in Safe headers.

Safe headers: Add headers that you want to declare as safe to log in plaintext. (Sensitive headers such as authorization will always be redacted, even if listed here.) Use a tab or hard return to separate header names.

Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.

Notes on HTTP-Based Outputs

  • To proxy outbound HTTP/S requests, see System Proxy Configuration.

  • Cribl Stream will attempt to use keepalives to reuse a connection for multiple requests. Two minutes after its first use, the connection will be discarded and a new connection attempted. This is to prevent sticking to a particular Destination when there is a constant flow of events.

  • If the server does not support keepalives, or if the server closes a pooled connection while idle, a new connection will be established for the next request.

  • Cribl Stream will pick the first IP in the list for use in the next connection. Enable Round-robin DNS to better balance distribution of events between CrowdStrike Falcon LogScale servers.

Testing Next-Gen SIEM Parser

For data sources like syslog or flat-file logs, you can test your NG SIEM parser using sample data from Cribl Stream. This process is less applicable to API-driven data, as NG SIEM often performs pre-processing on API calls before parsing.

  1. Extract sample _raw data: In Cribl Stream, while editing the relevant Pipeline, open the sample data output and copy the content of the _raw field.
  2. Add test event to parser: In the NG SIEM parser editor, select Add test and paste the copied _raw data.
  3. Validate parsing: Select Test parser to confirm that the event fields are parsed correctly.

If you are testing a default NG SIEM parser, you must first create a copy of the parser. Then, edit the copy and add the sample test event in the Test section.

Troubleshooting

The Destination’s configuration modal has helpful tabs for troubleshooting:

Live Data: Try capturing live data to see real-time events as they flow through the Destination. On the Live Data tab, click Start Capture to begin viewing real-time data.

Logs: Review and search the logs that provide detailed information about the delivery process, including any errors or warnings that may have occurred.

Test: Ensures that the Destination is correctly set up and reachable. Verify that sample events are sent correctly by clicking Run Test.

You can also view the Monitoring page, which provides a comprehensive overview of data volume and rate to help you identify delivery issues. Analyze the graphs showing events and bytes in/out over time.