In a Splunk environment, Cribl can be installed and configured as a Splunk app. Depending on your architecture, it can run either on a Heavy Forwarder (strongly advised) or on an Indexer.
It runs locally on that machine and receives events from the local Splunk process per the routing configurations in transforms.conf. Data is first parsed and processed by Splunk pipelines, and then by Cribl. By default, all data except internal indexes is routed out to Cribl right after the Typing pipeline. After being processed by Cribl, events are sent to one or more Splunk receivers downstream (typically indexers).
- Select an instance to install on, e.g., a Heavy Forwarder (ADVISED)
- Ensure that port 9000 is available. See here.
- Get the bits here and install as a regular Splunk app.
- Go to https://<instance>:9000 and log in with that instance's Splunk admin role credentials.
There are two deployment options for Cribl in a Splunk environment, and both involve heavy forwarders or indexers. Both Windows and Linux are supported only in Option A.
Option A: Deploying Cribl Splunk App on a Splunk Heavy Forwarder (ADVISED)
Option B: Deploying Cribl Standalone on a Splunk Indexer
Note about Splunk warnings
If you come across messages similar to the following on startup, or in logs, you can safely ignore them; they are benign warnings:
Invalid value in stanza [route2criblQueue]/[hecCriblQueue] in /opt/splunk/etc/apps/cribl/default/transforms.conf, line 11: (key: DEST_KEY, value: criblQueue) / line 24: (key: DEST_KEY, value: $1)
Cribl can natively accept data streams (unbroken events) or discrete events from sources. In this case, the HF delivers events locally to Cribl, which processes them and then sends them downstream. When the receivers are Splunk indexers, Cribl can also load balance across them.
When Cribl is installed as an app on a Splunk Heavy Forwarder, these are the relevant sections of the configuration files that ship by default, which enable Splunk to send data to Cribl.
outputs.conf:

```
[tcpout]
disabled = false
defaultGroup = cribl

[tcpout:cribl]
server = 127.0.0.1:10000
sendCookedData = true
useACK = false
negotiateNewProtocol = false
negotiateProtocolLevel = 0
```
transforms.conf:

```
[route2cribl]
SOURCE_KEY = _MetaData:Index
REGEX = ^[^_]
DEST_KEY = _TCP_ROUTING
FORMAT = cribl

[route2criblQueue]
SOURCE_KEY = _MetaData:Index
REGEX = ^[^_]
DEST_KEY = queue
FORMAT = criblQueue
```
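The `^[^_]` regular expression in the transforms above is what keeps internal indexes local: it matches only index names that do not begin with an underscore. A quick sketch of that behavior (the index names here are illustrative, not from the source):

```python
import re

# Splunk applies a transform only when REGEX matches SOURCE_KEY
# (here, _MetaData:Index). ^[^_] matches any index name whose
# first character is not an underscore, so internal indexes
# such as _internal and _audit are left alone.
ROUTE_REGEX = re.compile(r"^[^_]")

indexes = ["main", "web", "_internal", "_audit"]
routed = [ix for ix in indexes if ROUTE_REGEX.search(ix)]
print(routed)  # ['main', 'web']
```

Only the non-internal indexes would pick up the `_TCP_ROUTING`/`queue` keys and be routed out to Cribl.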
props.conf:

```
[default]
TRANSFORMS-cribl = route2criblQueue, route2cribl
```
The props.conf stanza above applies these transforms to everything. Depending on your requirements, you may want to target only a subset of your sources, sourcetypes, or hosts. For example, the diagram below shows the effective configuration of transforms.conf to send <bluedata> events through Cribl.
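As a sketch of that narrower targeting, a props.conf stanza scoped to a single sourcetype might look like the following (the `bluedata` sourcetype name is illustrative, standing in for whatever subset you want to route):

```
[bluedata]
TRANSFORMS-cribl = route2criblQueue, route2cribl
```

Scoping the stanza this way means only events of that sourcetype pick up the Cribl routing transforms, while everything else follows Splunk's normal path.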
To send data from Cribl to a set of Splunk indexers, use the Cribl UI to go to Destinations | Splunk Load Balanced and enter the required information.
Cribl can natively accept data streams (unbroken events) or discrete events from sources. In this case, data comes directly into Cribl, which processes it and then sends it downstream, including to the local Splunk indexer instance. This is exactly like a Standalone Deployment, but using a Splunk Indexer instance as the host.