2021-01-12 – Cribl LogStream 2.4 – GA Release
- Role-based access control is now enabled on distributed deployments with an Enterprise license.
- As of this release, you can set access control on LogStream objects down to the Worker Group level.
- Control users' access to view, modify, save, and/or deploy configuration objects.
- Edit or create Policies (permissions on objects).
- Clone and customize Roles, which gather access Policies (and are equivalent to user groups in other RBAC frameworks).
- Assign individual users to Roles.
- Map LDAP, OpenID, and Splunk user groups to LogStream's Roles via the LogStream UI.
- Creating, modifying, or deleting LogStream objects (including users) now has an audit trail in Monitoring > Logs.
- The API Server Settings > Advanced section now includes optional Logout on roles change enforcement, and a configurable Auth-token TTL setting that controls authentication tokens' expiration.
- LogStream 2.4 installs with a single admin superuser, assigned the admin Role, which grants all permissions on all objects. This maintains backward compatibility with earlier LogStream versions' single-role model. You can create additional users, and Role assignments, as your organization requires.
- You can now press Ctrl+K (all platforms) or Cmd+K (macOS) to search across LogStream objects by keyword.
- Search results retrieve Sources, Destinations, Collectors, Routes, Pipelines, Functions, Knowledge Libraries, etc.
- Search results' visibility and access are filtered by RBAC Role.
- DNS Lookup: This Function resolves host names to IP addresses ("A" records) or other record types. It also provides a Reverse DNS lookup feature (which will eventually replace the corresponding Reverse DNS Function).
- Redis: This Function sets, and gets, key-value and key-hash pairs on Redis stores.
- Prometheus: Pull data from Prometheus targets, on configurable time intervals.
- AppScope: This is a new application performance monitoring tool from Cribl. It offers lightweight, language-agnostic observability into virtually any application.
- New Relic: Added support for sending events out to the New Relic Log API and New Relic Metric API.
- Datadog: Added support for sending log and metric events to Datadog.
- Sumo Logic: Added support for sending events to Sumo Logic.
- Google Cloud Storage: Added support for sending objects to Google Cloud Storage buckets.
- On active Sources and Destinations, the Live button now triggers a Live Data view, and immediate data capture, in the resulting details modal.
- On Sources and Destinations, the Live button's details modal now includes a Configure tab to switch directly between health monitoring and configuration.
- Monitoring > Routes and Monitoring > Pipelines now provide a similar Live button for immediate capture.
- The Monitoring > Logs page's search box now matches strings and substrings. Matches are case-sensitive.
- The Monitoring > Logs page's Time column now provides a drop-down to arbitrarily change rendered events' timestamps to different time zones.
- The Monitoring > Destinations page now indicates blocked events more clearly.
- The Monitoring page's rollover cursor is now synchronized across graphs.
- Collectors provide a new, configurable Throttling threshold.
- Collection jobs (scheduled and ad hoc) provide new options for automatic reschedule upon failure, maximum number of retries, and timeout interval.
- Collector configurations now open in a modal, with Run and Schedule buttons directly accessible.
- A Collection job's Preview modal now inherits an editable Filter expression (field) from the parent Run configuration modal's Filter field.
- In Collectors' Run configuration and Schedule configuration modals, the Filter field now retains previous filter expressions for reuse via a drop-down list.
- Collectors' Run configuration and Schedule configuration modals now default to relative, rather than absolute, collection time ranges.
- Collection jobs' logs now support time filtering, and sort by default to most-recent first.
- Collection jobs' most-recent results are now cached.
- The Job Inspector now displays the Collector type for collection jobs performed on behalf of the Office 365 Services, Office 365 Activity, and Prometheus Sources.
- Changing a REST Collector's Discover type to None now (as intended) prevents further attempts to run the previously configured discovery type.
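The failure-handling options noted above for Collection jobs (automatic reschedule on failure, maximum retries, timeout) can be sketched roughly as follows. This is an illustrative sketch only, not LogStream code; the function name `runWithRetries` and its parameters are invented for the example, and the per-attempt timeout is omitted for brevity.

```javascript
// Illustrative sketch of retry-on-failure semantics (not a LogStream API):
// attempt the job once, then retry up to `maxRetries` additional times,
// rethrowing the last error once retries are exhausted.
function runWithRetries(job, maxRetries) {
  let lastErr;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return job(); // success: return the job's result immediately
    } catch (err) {
      lastErr = err; // remember the failure; retry if attempts remain
    }
  }
  throw lastErr; // all attempts failed
}
```

In this sketch, a job that fails twice and then succeeds returns its result on the third attempt, while a job that always fails surfaces its last error to the caller.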
- The Splunk TCP Source now supports fake/dummy acknowledgments. (Just the fACKs, Ma'am.)
- The Syslog Source's Fields to keep option allows you to retain specified input fields, while discarding the rest.
- With the Firehose Source, parsed events can now include authorization tokens, in an (internal)
- The Cribl Internal Source now creates event_host fields – distinct from host – for cleaner passthrough to Splunk.
- The Cribl Internal Source's CriblMetrics input no longer reports summary-style, dimension-free metrics. (Examples are total.in_events, which summed all events across all inputs; or total.in_events#input=syslog, which summed all syslog TCP and syslog UDP events.) Removing these duplicate summations conforms to industry standards, allowing downstream metric stores to natively sum these metrics.
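To illustrate why the removed summaries are redundant: a downstream metric store can reconstruct the old totals by summing the dimensioned per-input series itself. The sample data and field names below are invented for this sketch.

```javascript
// Invented sample of dimensioned per-input metrics (not real LogStream output):
const inEvents = [
  { metric: 'in_events', input: 'syslog:tcp', value: 120 },
  { metric: 'in_events', input: 'syslog:udp', value: 80 },
  { metric: 'in_events', input: 'http', value: 50 },
];

// Equivalent of the removed total.in_events summary (all inputs):
const total = inEvents.reduce((sum, m) => sum + m.value, 0);

// Equivalent of the removed total.in_events#input=syslog summary
// (all syslog inputs, TCP and UDP):
const syslogTotal = inEvents
  .filter((m) => m.input.startsWith('syslog'))
  .reduce((sum, m) => sum + m.value, 0);
```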
- Several HTTP-based Sources now report basic metrics: Splunk HEC, Elasticsearch API, HTTP/S, Raw HTTP/S, and Kinesis Firehose.
- A new cribl_metrics_rollup pre-processing Pipeline ships with LogStream. You can configure this to aggregate LogStream internal metrics from their default 2-second granularity to longer intervals.
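The rollup idea can be sketched as grouping fine-grained samples into larger time windows and summing each window. This is a conceptual illustration, not the actual Pipeline's implementation; the `rollup` function and sample values are invented.

```javascript
// Illustrative rollup: bucket samples into fixed windows and sum their values.
// `samples` is an array of { time (seconds), value }; `windowSecs` is the
// target interval width.
function rollup(samples, windowSecs) {
  const buckets = new Map();
  for (const { time, value } of samples) {
    const bucket = Math.floor(time / windowSecs) * windowSecs; // window start
    buckets.set(bucket, (buckets.get(bucket) || 0) + value);
  }
  return [...buckets.entries()].map(([time, value]) => ({ time, value }));
}

// Five 2-second samples rolled up into a single 10-second interval:
const rolled = rollup(
  [{ time: 0, value: 1 }, { time: 2, value: 2 }, { time: 4, value: 3 },
   { time: 6, value: 4 }, { time: 8, value: 5 }],
  10
);
```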
- All Splunk Destinations now support multiple-measurement metric data points. These output multiple metrics in a single event, enabling you to use Splunk capacity more efficiently.
- All HTTP-based Destinations (Splunk HEC, Elasticsearch, InfluxDB, etc.) now error-check for syntactically valid URLs upon save.
- To preserve data integrity, a Destination can no longer be deleted if it’s referenced by a Route, by an Output Router, or by the Default Destination.
- AWS Destinations have a new Authentication tab to clarify the selection between IAM/Assume Role versus Manual authentication (and corresponding credentials fields).
- The SQS Destination now provides a configurable Visibility timeout seconds setting to hide received messages from subsequent retrieve requests.
- Multiple non-streaming Destinations (S3, Filesystem/NFS, MinIO, Azure Blob Storage, and Google Cloud Storage) now provide an Add Output ID option. This gives each staging location a unique file path, to ensure that each configured Destination writes only to its own bucket.
- Destinations' System Fields option now provides typeahead hints upon click.
- The Mask Function now supports adding Evaluate fields: key-value expression pairs that identify events where one or more of the Masking Rules were matched.
- The Lookup Function, with the default Exact Match mode, now hides the Match type drop-down to avoid confusion.
- The Numerify Function's Ignore fields UI has been simplified.
- The Auto Timestamp Function has new Future timestamp allowed and Earliest timestamp allowed fields. These enable you to set realistic boundaries around parsed timestamps, and to substitute the current time for out-of-bounds (incorrectly parsed) values.
- A new C.Time.clamp method enables you to set similar boundaries on parsed timestamps in other Functions and Event Breakers.
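The clamping behavior described in the two items above can be sketched as follows. This is a hypothetical illustration of the semantics, not the actual C.Time.clamp implementation; the function name `clampTime` and its signature are invented.

```javascript
// Illustrative timestamp clamping: if a parsed timestamp falls outside the
// allowed [earliest, latest] bounds, substitute the current time instead of
// keeping the (likely misparsed) value.
function clampTime(parsed, earliest, latest, now = Date.now()) {
  return parsed < earliest || parsed > latest ? now : parsed;
}
```

For example, with bounds of [100, 1000] and a current time of 999, an in-bounds parse of 500 is kept, while out-of-bounds parses (5000 or 50) are replaced by 999.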
- A new C.Text.parseXml method parses an XML string, returning a JSON object.
- A new C.Text.parseWinEvent method parses a Windows event XML string, returning a compact, prettified JSON object. This enables substantial volume reduction with no loss of data.
- Five internal methods have been renamed with prepended __ characters to prevent namespace conflicts.
- The AWS VPC Flow log Event Breaker now properly handles the Filter Condition regex's trailing
- A large number of general UX improvements.
- Several pages (including Routes, Pipelines, and Knowledge) now enable displaying/hiding columns. Their gear configuration button is replaced by a left-side button with a 3-column indicator.
- The Routes and Pipelines pages now enable Copy/Paste for Routes, Pipelines, and Pipeline groups. Open the Options (...) menu at right to display the Copy option. Copying a resource to the clipboard displays a Paste button above.
- The Routes and Pipelines pages now provide Expand All/Collapse All toggles at their upper left.
- Creating or editing a Source, Destination, or Knowledge object now opens a configuration modal, enabling expanded navigation.
- Git Deploy commands now show a spinner while large files' deployments are still in progress.
- The Git Commit Changes (diff view) modal now provides a View entire file link to expand each changed file.
- Event Breaker Rulesets now include an Enabled toggle for each rule, to facilitate testing and troubleshooting.
- You can now add .mmdb binary files through LogStream's UI, via Knowledge > Lookups. These files are not editable within LogStream, but are available to Functions like GeoIP.
- Password and Passphrase fields now provide toggles to display cleartext.
- Starting LogStream in Master mode with an invalid configuration now displays an error message outlining how to correct the config.
- LogStream now normalizes the names of lookup and Grok-pattern files with unintended spaces. Files with only spaces (no alphanumeric characters) before the filename extension can no longer be created. Leading and trailing spaces in new filenames are now stripped.
- Users can now white-label LogStream's login page with a custom logo and text, at Settings > General Settings > Custom Login Page.
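The filename-normalization rules for lookup and Grok-pattern files can be sketched like this. The function `normalizeLookupName` is invented for illustration and is not a LogStream API.

```javascript
// Illustrative sketch of the normalization rules: strip leading/trailing
// spaces from the base name, and reject names whose base name consists
// solely of spaces (i.e., no characters survive trimming).
function normalizeLookupName(filename) {
  const dot = filename.lastIndexOf('.');
  const base = dot === -1 ? filename : filename.slice(0, dot);
  const ext = dot === -1 ? '' : filename.slice(dot); // includes the '.'
  const trimmed = base.trim();
  if (trimmed === '') throw new Error('invalid filename: empty base name');
  return trimmed + ext;
}
```

Under these rules, " geo .csv" would be normalized to "geo.csv", while a name like "   .csv" would be rejected.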
- CLI error messages have been reformatted for clarity.
- LogStream is upgraded to Node.js v14.15.1, a security fix that blocks denial-of-service attacks via DNS requests.
- Authentication tokens are now fully invalidated upon logout.
- Workers no longer need to expose port 9000 to the Master. Access to port 4200 alone is sufficient.
- Workers now correctly recognize a production license newly loaded onto the Master, and do not block inputs based on expired local license credentials.
- LogStream no longer re-encrypts unchanged values when configuration files for LogStream or LogStream jobs (jobs.yml) are resaved.
- The .dat file's location has changed from the $CRIBL_HOME/bin/ subdirectory to the $CRIBL_HOME/local/cribl/auth/ subdirectory. This facilitates maintaining it in a container's persistent volume.
- You can now install and configure a Worker with tags, using commands of the form:
- New ./cribl git [commit, deploy, commit-deploy] commands extend git CLI support to single-instance deployments.