Model Driven Telemetry Source
Cribl Stream supports receiving network device metrics and events via Model Driven Telemetry (MDT).
Type: Push | TLS Support: Yes | Event Breaker Support: No
For observability of network devices, MDT compares favorably with the older SNMP-based approach because:
- MDT’s messaging granularity is much finer than SNMP’s.
- MDT’s push-based model reduces load on network devices, while SNMP’s pull-based model queues requests on devices.
- MDT’s extensive ecosystem of data modules enables devices to collect many more kinds of data than they could with SNMP tools.
Prerequisites
When you configure MDT on a network device, you choose among a variety of options and techniques. For this Source (in Cribl Stream version 4.5), you must configure your upstream sending device to use the following:
- MDT in dial-out mode.
- gRPC for the data transport mechanism.
- YANG models for data representation.
- Self-describing (also known as key/value) Protocol Buffers (protobufs) for data encoding.
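To make "self-describing" concrete, the sketch below shows roughly what a key/value-encoded telemetry message looks like once decoded into an object: the field names travel inside the message itself, alongside the values. The shape, field names, and sample values here are illustrative approximations (loosely modeled on key/value GPB output), not an exact schema.

```typescript
// Illustrative only: an approximate shape for a decoded self-describing
// (key/value) telemetry message. Actual field names depend on the device's
// telemetry .proto and the YANG path being streamed.
interface KvField {
  name: string;                       // field name carried inside the message itself
  value?: string | number | boolean;  // leaf value, typed per the encoding
  fields?: KvField[];                 // nested key/value children
}

interface KvTelemetryMessage {
  encodingPath: string;   // YANG path describing the data being streamed
  nodeId: string;         // identifier of the sending device
  timestamp: number;      // collection timestamp (epoch ms)
  data: KvField[];        // self-describing key/value payload
}

// A hypothetical decoded record:
const sample: KvTelemetryMessage = {
  encodingPath: "Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface",
  nodeId: "rtr1.example.net",
  timestamp: 1718000000000,
  data: [
    { name: "interface-name", value: "GigabitEthernet0/0/0/0" },
    { name: "packets-received", value: 123456 },
    { name: "bytes-sent", value: 7890123 },
  ],
};
```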
Configuring Cribl Stream to Receive Model Driven Telemetry Data
From the top nav, click Manage, then select a Worker Group to configure. Next, you have two options:
To configure via the graphical QuickConnect UI, click Routing > QuickConnect (Stream) or Collect (Edge). Next, click Add Source at left. From the resulting drawer’s tiles, select [Push > ] Model Driven Telemetry. Next, click either Add Destination or (if displayed) Select Existing. The resulting drawer will provide the options below.
Or, to configure via the Routing UI, click Data > Sources (Stream) or More > Sources (Edge). From the resulting page’s tiles or left nav, select [Push > ] Model Driven Telemetry. Next, click New Source to open a New Source modal that provides the options below.
General Settings
Input ID: Enter a unique name to identify this Source definition. If you clone this Source, Cribl Stream will add -CLONE to the original Input ID.
Address: Enter the hostname/IP to listen to. Defaults to 0.0.0.0.
Port: Enter the port number to listen on. Defaults to port 57000.
Optional Settings
Tags: Optionally, add tags that you can use to filter and group Sources in Cribl Stream’s Manage Sources page. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.
TLS Settings (Server Side)
Enabled defaults to No. When toggled to Yes:
Certificate name: Name of the predefined certificate.
Private key path: Server path containing the private key (in PEM format) to use. Path can reference $ENV_VARS.
Certificate path: Server path containing certificates (in PEM format) to use. Path can reference $ENV_VARS.
CA certificate path: Server path containing CA certificates (in PEM format) to use. Path can reference $ENV_VARS.
Authenticate client (mutual auth): Require clients to present their certificates. Used to perform mutual authentication using SSL certs. Defaults to No. When toggled to Yes:
Validate client certs: Reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (such as the system’s CA). Defaults to Yes.
Common Name: Regex that a peer certificate’s subject attribute must match in order to connect. Defaults to .*. Matches on the substring after CN=. As needed, escape regex tokens to match literal characters. (For example, to match the subject CN=worker.cribl.local, you would enter: worker\.cribl\.local.) If the subject attribute contains Subject Alternative Name (SAN) entries, the Source will check the regex against all of those but ignore the Common Name (CN) entry (if any). If the certificate has no SAN extension, the Source will check the regex against the single name in the CN.
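As a quick illustration of the escaping rule (in TypeScript, purely to demonstrate regex behavior, not how Cribl performs the check), note how the escaped pattern matches only the literal subject name, while the unescaped one also matches unintended strings:

```typescript
// Demonstrates why regex metacharacters should be escaped when matching a
// literal certificate subject. Illustrative only; not Cribl's internal check.
const subjectName = "worker.cribl.local"; // the part after CN=

const escaped = /worker\.cribl\.local/;   // matches only the literal name
const unescaped = /worker.cribl.local/;   // '.' matches any character

console.log(escaped.test(subjectName));            // true
console.log(unescaped.test(subjectName));          // true
console.log(escaped.test("workerXcriblXlocal"));   // false
console.log(unescaped.test("workerXcriblXlocal")); // true -- broader than intended
```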
Persistent Queue Settings
In this section, you can optionally specify persistent queue storage, using the following controls. This will buffer and preserve incoming events when a downstream Destination is down, or exhibiting backpressure.
On Cribl-managed Cribl.Cloud Workers (with an Enterprise plan), this tab exposes only the Enable Persistent Queue toggle. If enabled, PQ is automatically configured in Always On mode, with a maximum queue size of 1 GB of disk space allocated per PQ-enabled Source, per Worker Process. The 1 GB limit is on uncompressed inbound data, and no compression is applied to the queue. This limit is not configurable. For configurable queue size, compression, mode, and other options below, use a hybrid Group.
Enable Persistent Queue: Defaults to No. When toggled to Yes:
Mode: Select a condition for engaging persistent queues.
Always On: This default option will always write events to the persistent queue, before forwarding them to Cribl Stream’s data processing engine.
Smart: This option will engage PQ only when the Source detects backpressure from Cribl Stream’s data processing engine.
Max buffer size: The maximum number of events to hold in memory before reporting backpressure to the sender and writing the queue to disk. Defaults to 1000. (This buffer is per connection, not just per Worker Process, so it can dramatically expand memory usage.)
Commit frequency: The number of events to send downstream before committing that Cribl Stream has read them. Defaults to 42.
Max file size: The maximum data volume to store in each queue file before closing it and (optionally) applying the configured Compression. Enter a numeral with units of KB, MB, and so forth. If not specified, Cribl Stream applies the default 1 MB.
Max queue size: The maximum amount of disk space that the queue is allowed to consume on each Worker Process. Once this limit is reached, this Source will stop queueing data and block incoming data. Required, and defaults to 5 GB. Accepts positive numbers with units of KB, MB, GB, and so forth. Can be set as high as 1 TB, unless you’ve configured a different Max PQ size per Worker Process in Group Settings. (For a worked sizing example, see the sketch after this section.)
Queue file path: The location for the persistent queue files. Defaults to $CRIBL_HOME/state/queues. To this field’s specified path, Cribl Stream will append /<worker-id>/inputs/<input-id>.
Compression: Optional codec to compress the persisted data after a file is closed. Defaults to None; Gzip is also available.
In Cribl Stream 4.1 and later, Source-side PQ’s default Mode is Always On. This setting helps give events the best chance of delivery. For details on optimizing this selection, see Always On versus Smart Mode. You can optimize Worker startup connections and CPU load at Group Settings > Worker Processes.
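Because Max queue size is enforced per Worker Process, and each PQ-enabled Source gets its own queue directory, a rough worst-case disk estimate for a Worker Node is the per-process limit multiplied by the number of Worker Processes and PQ-enabled Sources. The numbers in this sketch are assumptions for illustration, not recommendations:

```typescript
// Rough worst-case disk estimate for Source-side PQ on one Worker Node.
// All inputs here are hypothetical; adjust to your own configuration.
const maxQueueSizeGB = 5;      // Max queue size, per Worker Process (the default)
const workerProcesses = 8;     // Worker Processes running on this node
const pqEnabledSources = 2;    // Sources on this node with PQ enabled

const worstCaseDiskGB = maxQueueSizeGB * workerProcesses * pqEnabledSources;
console.log(`Worst-case PQ disk usage: ${worstCaseDiskGB} GB`); // 80 GB
```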
Processing Settings
Fields
In this section, you can add Fields to each event using Eval-like functionality.
Name: Field name.
Value: JavaScript expression to compute the field’s value, enclosed in quotes or backticks. (Can evaluate to a constant.)
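For example, the hypothetical entries below add a constant field and a field computed from the __srcIpPort internal field (described under Internal Fields). The expressions are ordinary JavaScript; the local declaration exists only to make the sketch runnable outside Cribl:

```typescript
// Example Field entries (the field names are arbitrary):
//   Name: sourcetype   Value: 'cisco:mdt'            // constant, in quotes
//   Name: origin       Value: `mdt:${__srcIpPort}`   // backtick template using an internal field

// Outside Cribl, the backtick expression evaluates like ordinary JavaScript.
// __srcIpPort is declared here only so the sketch runs on its own:
const __srcIpPort = "192.0.2.10:57344";
const origin = `mdt:${__srcIpPort}`; // "mdt:192.0.2.10:57344"
```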
Pre-Processing
In this section’s Pipeline drop-down list, you can select a single existing Pipeline to process data from this input before the data is sent through the Routes.
Advanced Settings
Max active connections: Maximum number of active connections allowed per Worker Process. Defaults to 1000. Set a lower value if connection storms are causing the Source to hang. Set 0 for unlimited connections.
Shutdown timeout: Time to allow the server to shut down gracefully before forcing shutdown. Defaults to 5000 milliseconds (5 seconds).
Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.
Connected Destinations
Select Send to Routes to enable conditional routing, filtering, and cloning of data from this Source via the Routing table.
Select QuickConnect to send data from this Source to one or more Destinations via independent, direct connections.
Internal Fields
Cribl Stream uses a set of internal fields to assist in handling of data. These “meta” fields are not part of an event, but they are accessible, and Functions can use them to make processing decisions.
Fields for this Source:
__srcIpPort
Troubleshooting
The Source’s configuration modal has helpful tabs for troubleshooting:
Live Data: Capture live data to see events in real time as they are ingested. On the Live Data tab, click Start Capture.
Logs: Review and search the logs that provide detailed information about the ingestion process, including any errors or warnings that may have occurred.
You can also view the Monitoring page, which provides a comprehensive overview of data volume and rate and can help you identify ingestion issues. Analyze the graphs showing events and bytes in/out over time.
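If no events arrive at all, it can also help to rule out basic connectivity. The sketch below (Node.js, using only the built-in net module) checks that something is listening on the configured address and port; it verifies TCP reachability only, not the gRPC session itself, and the hostname is a placeholder:

```typescript
// Minimal TCP reachability check (Node.js). Confirms only that something is
// listening on the configured address/port; it does not speak gRPC.
import * as net from "node:net";

const host = "worker.example.net"; // hypothetical Worker address
const port = 57000;                // default MDT port from General Settings

const socket = net.connect({ host, port, timeout: 5000 });
socket.on("connect", () => {
  console.log(`TCP connection to ${host}:${port} succeeded`);
  socket.end();
});
socket.on("timeout", () => {
  console.error("Connection timed out");
  socket.destroy();
});
socket.on("error", (err) => {
  console.error(`Connection failed: ${err.message}`);
});
```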