v.4.7.2 Release

PRODUCT    DATE          RELEASE        ADDITIONAL RESOURCES
Stream     2024-07-17    Maintenance    Known Issues, Cribl Edge 4.7.2 Release Notes

New Features

This release provides the following improvements:

Experience Improvements

The new OTLP Traces Function allows you to normalize and batch OpenTelemetry (OTLP) trace events. This Function supports OTLP versions 0.10.0 and 1.3.1. Batches leverage existing Resource Attributes, with the option to drop non-trace events. This enhancement improves the efficiency and flexibility of handling trace data, making it easier to manage and process trace events.
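
As a rough illustration only (not the Function’s actual implementation), the sketch below groups trace events by their Resource Attributes and optionally drops non-trace events; the event fields shown are hypothetical.

```typescript
// Hypothetical sketch: batch trace events under a shared set of Resource
// Attributes, optionally dropping events that carry no span data.
interface TraceEvent {
  resourceAttributes: Record<string, string>;
  span?: { name: string };
}

function batchByResource(events: TraceEvent[], dropNonTrace = true): TraceEvent[][] {
  const batches = new Map<string, TraceEvent[]>();
  for (const ev of events) {
    if (dropNonTrace && !ev.span) continue; // optionally drop non-trace events
    const key = JSON.stringify(Object.entries(ev.resourceAttributes).sort()); // stable key per resource
    let batch = batches.get(key);
    if (!batch) {
      batch = [];
      batches.set(key, batch);
    }
    batch.push(ev);
  }
  return [...batches.values()];
}
```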

Worker Group tiles on Cribl Stream’s landing page now offer health indicators for a quick preview of the groups whose Sources or Destinations are encountering issues.

You can now add tags to Edge Nodes and Stream Workers you’re teleported into. Because tags apply to the whole Fleet or Worker Group, you must confirm the change before saving the tags.

Cribl.Cloud Admins can now edit and delete Members and Teams.

The Final flag indicating non-default behavior in Pipelines and Routes has been updated with unique icons to better represent the data flows.

The Worker Group sizing calculator is a built-in Cribl.Cloud feature that interactively recommends how many Cribl-managed Worker Processes to provision or reprovision, based on your detailed throughput estimates. You can then make an informed choice to apply or override the recommendation, based on your specific data volume and processing needs.

We’ve enhanced your auditing capabilities by adding capture filter expressions to audit logs. This allows you to track and log the filter expressions used during data captures, providing greater transparency and traceability.

Sources and Destinations

HTTP-based Sources now support IP allowlist and denylist regex options. The allowlist regex permits requests from matching IP addresses, while the denylist regex blocks requests from matching IP addresses, even if they match the allowlist.
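
For intuition only, here is a hedged sketch of that precedence (the function below is hypothetical, not Cribl’s implementation): the denylist wins even when an IP also matches the allowlist.

```typescript
// Hypothetical sketch of allowlist/denylist evaluation for an HTTP-based Source.
// An IP is accepted only if it matches the allowlist regex (when one is set)
// and does not match the denylist regex; the denylist overrides the allowlist.
function isIpAllowed(ip: string, allowRegex?: RegExp, denyRegex?: RegExp): boolean {
  if (denyRegex && denyRegex.test(ip)) return false; // denylist overrides everything
  if (allowRegex) return allowRegex.test(ip);        // allowlist must match when configured
  return true;                                       // no filters configured: accept
}

// Example: allow a 10.0.x.x prefix but block one specific host.
const allow = /^10\.0\./;
const deny = /^10\.0\.0\.99$/;
console.log(isIpAllowed("10.0.1.25", allow, deny));   // true
console.log(isIpAllowed("10.0.0.99", allow, deny));   // false (denylist wins)
console.log(isIpAllowed("192.168.1.5", allow, deny)); // false (not in allowlist)
```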

We’ve enhanced the configurability of Kafka, Confluent Cloud, Amazon MSK, and Azure Event Hubs Sources and Destinations by exposing new retry mechanisms. The new Retries section allows you to fine-tune retry settings to better handle transient errors and improve the reliability of your integrations.
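
As a generic illustration of what retry tuning like this controls (the setting names below are hypothetical, not the exact fields in the Retries section), a capped exponential backoff around a flaky send might look like:

```typescript
// Hypothetical retry loop with capped exponential backoff for transient send errors.
interface RetrySettings {
  maxRetries: number;       // give up after this many retries
  initialBackoffMs: number; // wait before the first retry
  maxBackoffMs: number;     // cap on the backoff interval
}

async function sendWithRetries(
  send: () => Promise<void>,
  { maxRetries, initialBackoffMs, maxBackoffMs }: RetrySettings,
): Promise<void> {
  let backoff = initialBackoffMs;
  for (let attempt = 0; ; attempt++) {
    try {
      await send();
      return;
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retry budget exhausted
      await new Promise((r) => setTimeout(r, backoff));
      backoff = Math.min(backoff * 2, maxBackoffMs); // exponential growth, capped
    }
  }
}
```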

We’ve improved the performance of the Azure Blob Storage Destination by increasing the buffer size for data uploads. This change reduces the number of operations required, resulting in faster and more cost-effective data transfers to Azure Blob Storage.

The Datadog Destination now supports sending distribution metrics via the api/v1/distribution_points endpoint. When an event contains only distribution type metrics, including those generated by the Publish Metrics Function, it will use this new endpoint to send the data to Datadog. This allows for more accurate and efficient metric data handling, ensuring that distribution metrics are properly routed and processed by Datadog.
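
For illustration only, the sketch below posts a distribution metric directly to that endpoint using Datadog’s public v1 distribution-points payload shape; the metric name, values, and API key handling are placeholders, and this is not Cribl’s internal code.

```typescript
// Illustrative only: send one distribution metric to Datadog's
// api/v1/distribution_points endpoint. Metric name, values, and the API key
// environment variable are placeholders.
async function sendDistributionExample(): Promise<void> {
  const payload = {
    series: [
      {
        metric: "request.latency.dist",                                // placeholder metric name
        points: [[Math.floor(Date.now() / 1000), [12.5, 18.2, 44.0]]], // [timestamp, [sample values]]
        tags: ["env:example"],
      },
    ],
  };

  await fetch("https://api.datadoghq.com/api/v1/distribution_points", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "DD-API-KEY": process.env.DD_API_KEY ?? "", // placeholder credential source
    },
    body: JSON.stringify(payload),
  });
}
```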

We’ve added a DNS resolution period (sec) setting for Syslog (UDP), SNMP, and Metrics (UDP) Destinations. Setting this value above zero means DNS lookups for hostnames will occur periodically instead of on every outgoing datagram.
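
For intuition, here is a minimal, hypothetical sketch (not Cribl’s implementation) of the difference this setting makes: with a period above zero, the resolved address is cached and only refreshed after the period elapses, rather than being looked up for every outgoing datagram.

```typescript
import { promises as dns } from "node:dns";

// Hypothetical periodic DNS cache: re-resolve a hostname only after
// `periodSec` seconds instead of before every outgoing datagram.
class PeriodicResolver {
  private cached?: { address: string; resolvedAt: number };

  constructor(private hostname: string, private periodSec: number) {}

  async resolve(): Promise<string> {
    const now = Date.now();
    const stale =
      !this.cached || now - this.cached.resolvedAt > this.periodSec * 1000;
    if (this.periodSec > 0 && !stale) {
      return this.cached!.address; // reuse the cached lookup within the period
    }
    const { address } = await dns.lookup(this.hostname); // periodSec of 0: resolve every time
    this.cached = { address, resolvedAt: now };
    return address;
  }
}
```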

The Filesystem Destination has two new settings, Compression level and Writing high watermark (KB). These allow you to optimize file compression for performance and manage buffer sizes for efficient file writing, enhancing overall system efficiency and resource management.
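
These two settings map to familiar concepts. As a generic Node.js illustration only (not Cribl’s code, and the file path is a placeholder), a gzip compression level trades CPU for output size, while a write stream’s high watermark controls how much data buffers before flushing:

```typescript
import { createWriteStream } from "node:fs";
import { createGzip } from "node:zlib";
import { pipeline } from "node:stream/promises";
import { Readable } from "node:stream";

// Generic illustration of the two knobs: gzip compression level (CPU vs. size)
// and a write stream's highWaterMark (buffer size before writes flush).
const gzip = createGzip({ level: 6 });                                           // moderate compression
const out = createWriteStream("/tmp/example.gz", { highWaterMark: 64 * 1024 });  // 64 KB write buffer

await pipeline(Readable.from(["example payload\n"]), gzip, out);
```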

On HTTP/S-based Destinations, when persistent queues drain, Cribl Stream and Cribl Edge now minimize the transmission of potential duplicate events that were retried on failure. However, some events that were in transit to the Destination might still emerge as duplicates.

The REST Collector now includes a new setting, Retry-After header name, that allows you to specify a custom header name for handling rate limiting. This ensures compatibility with services that use non-standard headers for retry-after values.
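
As a hedged sketch of what this setting enables (a hypothetical function, not the Collector’s code): on a 429 response, read the wait time from the configured header name, which may be a non-standard name like X-RateLimit-Reset-After, and retry after that many seconds.

```typescript
// Hypothetical sketch: honor a rate-limit hint from a configurable header name
// (for example "X-RateLimit-Reset-After") instead of only the standard "Retry-After".
async function fetchWithRateLimitRetry(
  url: string,
  retryAfterHeaderName = "Retry-After",
): Promise<Response> {
  for (;;) {
    const res = await fetch(url);
    if (res.status !== 429) return res;
    const value = res.headers.get(retryAfterHeaderName);
    const waitSec = value && !Number.isNaN(Number(value)) ? Number(value) : 1; // fall back to 1s
    await new Promise((r) => setTimeout(r, waitSec * 1000));
  }
}
```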

Corrections

This release includes the following fixes:

Security Fixes

ID             Description
CRIBL-25228    The mysql2 library used by the Database Collector has been updated to version 3.10.0 to include the latest security updates.
CRIBL-26016    When attempting to connect to a database using an invalid connection string, the URL was logged in the error logs. This behavior could expose sensitive information, such as credentials.
CRIBL-25417    Using custom CA certificates with Cribl HTTP Sources and Destinations with Load Balancing enabled caused certificate verification failures. The connection would fail with a self-signed certificate in certificate chain error.
CRIBL-25951    Scripts, as a security-sensitive feature, are now disabled by default in new deployments. Admin users can enable them in the settings if needed. Existing deployments are unchanged, but Admins can also disable scripts for the whole deployment.

Source and Destination Fixes

ID             Description
CRIBL-25928    When sending events to a Splunk HEC Destination, if the _raw field is an empty string, Cribl Stream now sends an empty string as _raw instead of trying to send the full serialized event, which previously caused an error from Splunk and dropped the event.
CRIBL-25927    Events sent to a Splunk HEC Destination with a null _time field were dropped by Splunk, halting further processing of the entire payload (potential data loss).
CRIBL-26090    Fixed a bug in the Splunk HEC Destination that caused arrays to be sent as a string instead of a multi-valued field. If you depended on this behavior previously, you can use a Pipeline to JSON.stringify() your array before sending it out through the Splunk HEC Destination (see the sketch after this table).
CRIBL-25431    The Splunk HEC Destination was incorrectly sending the _subsecond field, causing downstream issues. This field was redundant because subsecond information is already preserved in the _time field.
CRIBL-25298    Metadata fields set by the Universal Forwarder were inconsistently dropped when collecting data from a Splunk TCP Source in Cribl Stream.
CRIBL-25411    Removed the redundant Authentication method and Auth token fields from the General Settings section of the Splunk Load Balanced Destination configuration. These fields were previously visible above the OPTIONAL SETTINGS and outside of the Authentication tokens group, causing confusion. Users should now configure authentication tokens within the designated group.
CRIBL-25633    In certain cases, events sent to Splunk Cloud using the Splunk TCP Source and Splunk TCP Destination with S2S v3 were not ingested due to malformed subsecond fields in the timestamp.
CRIBL-25331    When using the Splunk TCP Source with S2S version v4, the Source was incorrectly interpreting packet data of metrics events in some cases. As a result, some fields in the events were assigned the wrong values.
CRIBL-25018    When you used REST Collectors with pagination enabled and turned on the Capture response headers toggle (introduced in the 4.6.1 release and off by default), the response headers were not captured as expected.
CRIBL-25763    The REST Collector’s job state was incorrectly updated between tasks when using pagination. This could lead to inconsistent state tracking and potential data gaps or overlaps in the collected data. You might have noticed that the state from one paginated task affected subsequent tasks within the same job, causing unexpected behavior.
CRIBL-25418    When upgrading to Cribl Stream 4.7.0, SQL Server Collectors configured to use the Config authentication method would fail to load, causing the Database Collector to stop working. The only valid authentication method for SQL Server Collectors was Connection String. Additionally, the section to manage Database Connections (under Knowledge) would not load existing connections.
CRIBL-25572    Syslog Sources configured with octet-count framing (version 4.7.0 and older), or with octetCounting: true (via Manage as JSON since v.4.7.0), could malfunction if they received unframed messages over TCP. This caused the Source to return many warning messages like Invalid octet count: undefined. Trying to skip to next frame, and the Worker could run out of memory.
CRIBL-25435    When using a Prometheus Source for summary metrics, the quantile_values were formatted as an object. However, for OTLP (OpenTelemetry Protocol) serialization, quantile_values must be in array format. This discrepancy caused errors when attempting to write summary metrics to Kafka Destinations, resulting in failed metric transmissions.
CRIBL-24244    When an Amazon Kinesis Data Streams Source encountered an invalid sequence number, it stopped pulling data without logging any errors. This issue occurred if the sequence number no longer existed due to data expiration, or if the Source was disabled for an extended period.
CRIBL-24060    Timezone shifts were applied earlier than expected. This caused a few issues: events ingested into Splunk Cloud had incorrect timestamps, resulting in a one-hour discrepancy, and the Auto Timestamp Function, timestamp parsing in Event Breakers, and the Syslog Source had timezone discrepancies.
CRIBL-25885    MySQL database connections using the mysqls:// URL scheme did not properly initialize the TLS context. This caused connection failures when accessing MySQL databases with secure transport requirements.
CRIBL-23872    When configuring multiple Event Hub Sources in Cribl Stream with different connection strings but the same consumer group ID, Azure Event Hubs denied permissions, causing the Sources to fail to connect. This issue occurred when creating multiple Sources in the same consumer group, configuring them to read from different topics, and using per-topic connection strings so that each Source could only access its topic.
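
Related to CRIBL-26090: if you previously relied on arrays arriving as strings, the workaround is to stringify the array yourself before the event reaches the Splunk HEC Destination. The snippet below only illustrates that expression logic with a hypothetical field name; it is not a ready-made Pipeline configuration.

```typescript
// Hypothetical illustration of the CRIBL-26090 workaround: stringify an array
// field (here named "values") before the event is sent to the Splunk HEC
// Destination, mirroring what an Eval-style JSON.stringify() expression would do.
const event: Record<string, unknown> = { _raw: "sample", values: [1, 2, 3] };

if (Array.isArray(event.values)) {
  event.values = JSON.stringify(event.values); // "[1,2,3]" instead of a multi-value field
}

console.log(event);
```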

Packs Fixes

ID             Description
CRIBL-24973    We’ve implemented a limit of 200 Pipelines per Pack to ensure optimal performance and prevent potential errors. This helps maintain stability and avoids issues that can arise from managing very large Packs.
CRIBL-24516    When you use a pre-processing Pipeline that comes from a Pack, the Full Preview tab in the Pipeline now displays the correct information. This issue only occurred when you were previewing from the Leader; the Full Preview from a Worker Node was not affected.

Other Functional Fixes

ID             Description
CRIBL-25539    When a Leader Node failed over, the state for REST Collectors, Database Collectors, Wiz Sources, Kinesis Sources, and Windows Event Forwarder (WEF) Sources was not preserved. The state is now preserved as expected.
CRIBL-12868    When you enabled the GitOps Push workflow, you were unable to run ad hoc Collection jobs. You can now run ad hoc collections as expected.
CRIBL-22623    In GitOps environments, inactive Destinations were causing data flow to be blocked because the default backpressure behavior was to block. When a Destination was inactive, it triggered backpressure, preventing data from being processed and sent to other active Destinations. The default behavior is now set to drop data so data flow is not blocked.
CRIBL-25559    Users assigned the Editor permission within a Cribl Stream Project were unable to edit Pipelines in that Project. When attempting to edit a Pipeline, the user encountered a permissions error. Cribl Stream Project editors can now edit Project Pipelines.
CRIBL-25888    Groups and Fleets with similar names (hyphen vs. underscore) caused configuration conflicts. This update eliminates naming-based configuration conflicts, improving the reliability of UI configuration workflows.
CRIBL-25304    Lowering the Max number of metrics limit could cause errors similar to Cannot read properties of null (reading 'trim') when sending CriblMetrics out through a Destination.
CRIBL-24912    Users can now access the Monitoring > System > Job Inspector page for Cribl-managed Worker Groups in Cribl.Cloud using the UI. Previously, this page would not load and would display an error. Customer-managed hybrid Worker Groups were not affected.
CRIBL-25151    Users can now set the logging level for customer-managed (hybrid) Worker Groups in Cribl.Cloud. Previously, the Provisioned setting for these Groups was incorrectly set to No instead of N/A, which kept users from being able to set the logging level.
CRIBL-25702    The jemalloc library now loads correctly for on-prem deployments that use the x86_64 architecture.