These docs are for Cribl Edge 4.12 and are no longer actively maintained.
See the latest version (4.13).
Model Driven Telemetry Source
Cribl Edge supports receiving network device metrics and events via Model Driven Telemetry (MDT).
Type: Push | TLS Support: Yes | Event Breaker Support: No
For observability of network devices, MDT compares favorably with the older SNMP-based approach because:
- MDT’s messaging granularity is much finer than SNMP’s.
- MDT’s push-based model reduces load on network devices, while SNMP’s pull-based model queues requests on devices.
- MDT’s extensive ecosystem of data modules enables devices to collect many more kinds of data than they could with SNMP tools.
Prerequisites
When you configure MDT on a network device, you choose among a variety of options and techniques. For this Source (introduced in Cribl Edge 4.5), you must configure your upstream sending device to use the following:
- MDT in dial-out mode.
- gRPC for the data transport mechanism.
- YANG models for data representation.
- Self-describing (also known as Key/Value) Protocol Buffers (protobufs) for data encoding.
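As a rough illustration of the self-describing (key/value) encoding, the sketch below flattens a decoded telemetry field tree into path/value pairs. Real MDT payloads arrive as protobuf messages; the nested-dict structure and the field names here are hypothetical stand-ins for the decoded form.

```python
# Illustrative sketch: flatten a self-describing (key/value) telemetry
# field tree into path -> value pairs. Real MDT payloads are protobuf
# messages; nested dicts model the decoded structure here for clarity.

def flatten_kv_fields(fields, prefix=""):
    """Recursively flatten key/value telemetry fields."""
    flat = {}
    for field in fields:
        path = f"{prefix}/{field['name']}" if prefix else field["name"]
        if "fields" in field:            # branch node: recurse
            flat.update(flatten_kv_fields(field["fields"], path))
        else:                            # leaf node: record its value
            flat[path] = field.get("value")
    return flat

# Example: a trimmed interface-counters sample (hypothetical values).
sample = [
    {"name": "interface-name", "value": "GigabitEthernet0/0/0/0"},
    {"name": "state", "fields": [
        {"name": "packets-received", "value": 1024},
        {"name": "packets-sent", "value": 2048},
    ]},
]

print(flatten_kv_fields(sample))
```

Because each message carries its keys alongside its values, a receiver can decode it without the device's YANG model in hand.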
Configure Cribl Edge to Receive Model Driven Telemetry Data
- On the top bar, select Products, and then select Cribl Edge. Under Fleets, select a Fleet. Next, you have two options:
- To configure via QuickConnect, navigate to Routing > QuickConnect. Select Add Source and select the Source you want from the list, choosing either Select Existing or Add New.
- To configure via Routes, select Data > Sources. Select the Source you want, then select Add Source.
- In the New Source modal, configure the following under General Settings:
  - Input ID: Enter a unique name to identify this Source definition. If you clone this Source, Cribl Edge will add `-CLONE` to the original Input ID.
  - Description: Optionally, enter a description.
  - Address: Enter the hostname/IP to listen to. Defaults to `0.0.0.0`.
  - Port: Enter the port number to listen on. Defaults to port `57000`.
- Under Authentication, enter the Auth tokens:
- Auth tokens: Shared secrets to be provided by any client. Select Generate to create a new secret. If this list is empty, the Source permits open access.
- Description: Optionally, enter a description.
- Fields: Fields to add to events referencing this token. Each field is a Name/Value pair, where Value is a JavaScript expression to compute the field’s value, enclosed in quotes or backticks. (Can evaluate to a constant.) For details, see Authentication Fields.
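The token-plus-fields behavior above can be sketched roughly as follows. This is an illustrative model, not Cribl's implementation: `authorize` and the token structure are hypothetical names.

```python
# Hypothetical sketch of auth-token checking with attached fields:
# an empty token list means open access; otherwise the client's token
# must match one configured secret, and that token's fields are added
# to each event from the connection.

def authorize(client_token, configured_tokens):
    """Return the fields to attach, or None if the client is rejected."""
    if not configured_tokens:          # empty list: open access
        return {}
    for token in configured_tokens:
        if client_token == token["secret"]:
            return dict(token.get("fields", {}))
    return None                        # no match: reject the connection

tokens = [{"secret": "s3cret", "fields": {"site": "nyc-lab"}}]
print(authorize("s3cret", tokens))   # {'site': 'nyc-lab'}
print(authorize("wrong", tokens))    # None
print(authorize("anything", []))     # {} - open access
```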
- Next, you can configure the following Optional Settings:
- Tags: Optionally, add tags that you can use to filter and group Sources in Cribl Edge’s UI. These tags aren’t added to processed events. Use a tab or hard return between (arbitrary) tag names.
- Optionally, you can adjust the TLS, Persistent Queue, Processing, and Advanced Settings, or Connected Destinations, outlined in the sections below.
- Select Save, then Commit & Deploy.
TLS Settings (Server Side)
Enabled: Defaults to toggled off. When toggled on:
Certificate name: Name of the predefined certificate.
Private key path: Server path containing the private key (in PEM format) to use. Path can reference `$ENV_VARS`.
Certificate path: Server path containing certificates (in PEM format) to use. Path can reference `$ENV_VARS`.
CA certificate path: Server path containing CA certificates (in PEM format) to use. Path can reference `$ENV_VARS`.
Authenticate client (mutual auth): Require clients to present their certificates. Used to perform mutual authentication using SSL certs. Default is toggled off. When toggled on:
Validate client certificates: Reject certificates that are not authorized by a CA in the CA certificate path, or by another trusted CA (such as the system’s CA). Default is toggled on.
Common name: Regex that a peer certificate’s subject attribute must match in order to connect. Defaults to `.*`. Matches on the substring after `CN=`. As needed, escape regex tokens to match literal characters. (For example, to match the subject `CN=worker.cribl.local`, you would enter: `worker\.cribl\.local`.) If the subject attribute contains Subject Alternative Name (SAN) entries, the Source will check the regex against all of those, but ignore the Common Name (CN) entry (if any). If the certificate has no SAN extension, the Source will check the regex against the single name in the CN.
Persistent Queue Settings
In the Persistent Queue Settings tab, you can optionally specify persistent queue storage, using the following controls. Persistent queue buffers and preserves incoming events when a downstream Destination has an outage or experiences backpressure.
Before enabling persistent queue, learn more about persistent queue behavior and how to optimize it with your system:
On Cribl-managed Cloud Workers (with an Enterprise plan), this tab exposes only the Enable persistent queue toggle. If enabled, PQ is automatically configured in `Always On` mode, with a maximum queue size of 1 GB disk space allocated per PQ‑enabled Source, per Worker Process. The 1 GB limit is on uncompressed inbound data, and the queue does not perform any compression. This limit is not configurable. For configurable queue size, compression, mode, and other options below, use a hybrid Group.
Enable persistent queue: Default is toggled off. When toggled on:
Mode: Select a condition for engaging persistent queues.
- `Always On`: This default option will always write events to the persistent queue before forwarding them to the Cribl Stream data processing engine.
- `Smart`: This option will engage PQ only when the Source detects backpressure from the Cribl Stream data processing engine.
`Smart` mode only engages when necessary, such as when a downstream Destination becomes blocked and the buffer reaches its Buffer size limit. When persistent queue is set to `Smart` mode, Cribl attempts to flush the queue whenever a new event arrives. The only time events stay in the buffer is when a downstream Destination becomes blocked.
Buffer size limit: The maximum number of events to hold in memory before reporting backpressure to the sender and writing the queue to disk. Defaults to `1000`. This buffer is for all connections, not just per Worker Process; for that reason, it can dramatically expand memory usage. Connections share this limit, which may result in slightly lower throughput for higher numbers of connections. For higher numbers of connections, consider increasing the limit.
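A toy model of the `Smart` mode behavior described above (all names are hypothetical, not Cribl's engine): events pass straight through while the downstream is healthy, accumulate in the buffer while it is blocked, and trigger backpressure once the buffer size limit is reached.

```python
# Toy model of Smart-mode buffering (illustrative only): deliver while
# the downstream is healthy, buffer while it is blocked, and report
# backpressure once the buffer size limit is hit.

class SmartBuffer:
    def __init__(self, limit=1000):
        self.limit = limit
        self.buffer = []
        self.delivered = []

    def receive(self, event, downstream_blocked):
        if downstream_blocked:
            if len(self.buffer) >= self.limit:
                return "backpressure"   # time to spill to disk / slow the sender
            self.buffer.append(event)
            return "buffered"
        # Downstream healthy: flush anything queued, then deliver this event.
        self.delivered.extend(self.buffer)
        self.buffer.clear()
        self.delivered.append(event)
        return "delivered"

q = SmartBuffer(limit=2)
print(q.receive("e1", downstream_blocked=False))  # delivered
print(q.receive("e2", downstream_blocked=True))   # buffered
print(q.receive("e3", downstream_blocked=True))   # buffered
print(q.receive("e4", downstream_blocked=True))   # backpressure
print(q.receive("e5", downstream_blocked=False))  # delivered (flushes e2, e3)
```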
Commit frequency: The number of events to send downstream before committing that Stream has read them. Defaults to `42`.
File size limit: The maximum data volume to store in each queue file before closing it and (optionally) applying the configured Compression. Enter a numeral with units of KB, MB, and so forth. If not specified, Cribl Stream applies the default `1 MB`.
Queue size limit: The maximum amount of disk space that the queue is allowed to consume on each Worker Process. Once this limit is reached, this Source will stop queueing data and block incoming data. Required, and defaults to `5GB`. Accepts positive numbers with units of `KB`, `MB`, `GB`, and so forth. Can be set as high as `1 TB`, unless you’ve configured a different Worker Process PQ size limit in Group or Fleet settings.
Queue file path: The location for the persistent queue files. Defaults to `$CRIBL_HOME/state/queues`. To this field’s specified path, Cribl Stream will append `/<worker-id>/inputs/<input-id>`.
Compression: Optional codec to compress the persisted data after a file closes. Defaults to `None`; `Gzip` is also available.
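To make the size units and path layout above concrete, here is an illustrative sketch (the helper names are hypothetical) that converts a size string such as `5GB` to bytes and builds the per-process queue directory the docs describe:

```python
import os

# Illustrative helpers: parse a queue-size string such as "5GB" into
# bytes, and build the queue directory described above:
# $CRIBL_HOME/state/queues/<worker-id>/inputs/<input-id>.

UNITS = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def parse_size(size):
    for unit, factor in UNITS.items():
        if size.upper().endswith(unit):
            return int(float(size[:-len(unit)].strip()) * factor)
    return int(size)   # bare number: treat as bytes

def queue_dir(cribl_home, worker_id, input_id):
    # worker_id and input_id values here are placeholders.
    return os.path.join(cribl_home, "state", "queues",
                        worker_id, "inputs", input_id)

print(parse_size("5GB"))                            # 5368709120
print(queue_dir("/opt/cribl", "wp0", "mdt-in"))     # /opt/cribl/state/queues/wp0/inputs/mdt-in
```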
In Cribl Stream 4.1 and newer, the Source persistent queue default Mode is `Always On`, to best ensure events’ delivery. For details on optimizing this selection, see Optimize Source Persistent Queues (sPQ). You can optimize Workers’ startup connections and CPU load at Group/Fleet settings > Worker Processes.
Processing Settings
Fields
In this section, you can define new fields or modify existing ones using JavaScript expressions, similar to the Eval function.
- The Field Name can either be a new field (unique within the event) or an existing field name to modify its value.
- The Value is a JavaScript expression (enclosed in quotes or backticks) to compute the field’s value (can be a constant). Select this field’s advanced mode icon (far right) if you’d like to open a modal where you can work with sample data and iterate on results.
This flexibility means you can:
- Add new fields to enrich the event.
- Modify existing fields by overwriting their values.
- Compute logic or transformations using JavaScript expressions.
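The Fields behavior can be modeled as below. Cribl evaluates JavaScript expressions; in this illustrative Python sketch, callables stand in for expressions and anything else is treated as a constant.

```python
# Toy illustration of the Fields behavior: each Name/Value pair adds
# or overwrites a field on the event. Callables model JS expressions;
# anything else is a constant. Names and values are hypothetical.

def apply_fields(event, field_defs):
    out = dict(event)          # leave the original event untouched
    for name, value in field_defs.items():
        out[name] = value(out) if callable(value) else value
    return out

event = {"device": "rtr1", "bytes": 1500}
fields = {
    "env": "prod",                              # constant value
    "kbytes": lambda e: e["bytes"] / 1024,      # computed from the event
    "device": lambda e: e["device"].upper(),    # overwrite an existing field
}
print(apply_fields(event, fields))
```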
Pre-Processing
In this section’s Pipeline drop-down list, you can select a single existing Pipeline or Pack to process data from this input before the data is sent through the Routes.
Advanced Settings
Active connection limit: Maximum number of active connections allowed per Worker Process. Defaults to `1000`. Set a lower value if connection storms are causing the Source to hang. Set `0` for unlimited connections.
Shutdown timeout: Time to allow the server to shut down gracefully before forcing shutdown. Defaults to `5000` milliseconds (5 seconds).
Environment: If you’re using GitOps, optionally use this field to specify a single Git branch on which to enable this configuration. If empty, the config will be enabled everywhere.
Connected Destinations
Select Send to Routes to enable conditional routing, filtering, and cloning of data from this Source via the Routing table.
Select QuickConnect to send data from this Source to one or more Destinations via independent, direct connections.
Internal Fields
Cribl Edge uses a set of internal fields to assist in handling of data. These “meta” fields are not part of an event, but they are accessible, and Functions can use them to make processing decisions.
Fields for this Source:
__srcIpPort
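For illustration, a field like `__srcIpPort` (assuming a `host:port` format) can be split for use in processing decisions; the helper name is hypothetical.

```python
# Illustrative sketch: split an address-and-port string like the
# __srcIpPort internal field into host and port parts. Assumes a
# "host:port" format; rpartition keeps any earlier colons on the left.

def split_ip_port(src):
    host, _, port = src.rpartition(":")
    return host, int(port)

print(split_ip_port("10.0.0.5:57000"))   # ('10.0.0.5', 57000)
```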
Troubleshooting
The Source’s configuration modal has helpful tabs for troubleshooting:
Live Data: On the Live Data tab, select Start Capture to view real-time events as they are ingested.
Logs: Review and search the logs that provide detailed information about the ingestion process, including any errors or warnings that may have occurred.
You can also view the Monitoring page that provides a comprehensive overview of data volume and rate, helping you identify ingestion issues. Analyze the graphs showing events and bytes in/out over time.