Splunk App Deployment

Deploying Cribl as a Splunk App


In a Splunk environment, Cribl can be installed and configured as a Splunk app. Depending on your architecture, it can run either on a Heavy Forwarder (strongly advised) or on an Indexer.

It runs locally on that machine and receives events from the local Splunk process, per the routing configurations in props.conf and transforms.conf. Data is first parsed and processed by Splunk pipelines, and then by Cribl. By default, all data except internal indexes is routed to Cribl right after the Typing pipeline. After being processed by Cribl, events are sent to one or more downstream Splunk receivers (typically indexers).

Installing the Cribl Splunk App

  • Select an instance to install on, e.g., a Heavy Forwarder (ADVISED).
  • Ensure that ports 10000 and 9000 are available. See here.
  • Get the bits here and install as a regular Splunk app.
  • Go to https://<instance>:9000 and log in with that instance's Splunk admin role credentials.

Deployment Options


There are two deployment options for Cribl in a Splunk environment, and both involve Heavy Forwarders or Indexers. Windows and Linux are both supported only in Option A.

Option A: Deploying Cribl Splunk App on a Splunk Heavy Forwarder (ADVISED)
Option B: Deploying Cribl Standalone on a Splunk Indexer

Note about Splunk warnings

If you come across messages similar to the following on startup, or in logs:
Invalid value in stanza [route2criblQueue]/[hecCriblQueue] in /opt/splunk/etc/apps/cribl/default/transforms.conf, line 11: (key: DEST_KEY, value: criblQueue) / line 24: (key: DEST_KEY, value: $1)
you can safely ignore them. They are benign warnings.

Option A. Deploying Cribl Splunk App on a Splunk Heavy Forwarder


Cribl can natively accept data streams (unbroken events) or events from sources. In this case, the Heavy Forwarder delivers events locally to Cribl, which processes them and then sends them downstream. When the receivers are Splunk indexers, Cribl can also load balance across them.

1. Relevant configurations

When Cribl is installed as an app on a Splunk Heavy Forwarder, the following sections of the configuration files that ship by default (outputs.conf, inputs.conf, transforms.conf, and props.conf) enable Splunk to send data to Cribl.

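# outputs.conf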
[tcpout]
disabled = false 
defaultGroup = cribl

[tcpout:cribl]
server = 127.0.0.1:10000
sendCookedData = true
useACK = false
negotiateNewProtocol = false
negotiateProtocolLevel = 0
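
# inputs.conf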
[splunktcp]
route=has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:__CRIBBLED:indexQueue;has_key:_linebreaker:criblQueue;absent_key:_linebreaker:parsingQueue
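
# transforms.conf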
[route2cribl]
SOURCE_KEY = _MetaData:Index
REGEX = ^[^_]
DEST_KEY = _TCP_ROUTING
FORMAT = cribl 

[route2criblQueue]
SOURCE_KEY = _MetaData:Index
REGEX = ^[^_]
DEST_KEY = queue
FORMAT = criblQueue
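
# props.conf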
[default]
TRANSFORMS-cribl = route2criblQueue, route2cribl

Configuring Cribl with a subset of your data

The props.conf [default] stanza above applies the transforms to all events. Depending on your requirements, you may want to target only a subset of your sources, sourcetypes, or hosts. For example, to send only <bluedata> events through Cribl, you can scope the props.conf stanza to that subset while leaving the outputs.conf and transforms.conf configurations unchanged, as sketched below.
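A minimal sketch, assuming <bluedata> refers to a sourcetype named bluedata (use a source:: or host:: stanza instead to match on sources or hosts). Only props.conf changes; the transforms stay as shipped:

# props.conf -- apply the Cribl transforms only to the bluedata sourcetype
[bluedata]
TRANSFORMS-cribl = route2criblQueue, route2cribl

With this stanza in place of [default], events from other sourcetypes skip the Cribl transforms and follow Splunk's normal routing.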

2. Configure Cribl to send data to Splunk

To send data from Cribl to a set of Splunk indexers, use the Cribl UI to go to Destinations | Splunk Load Balanced and enter the required information.

Option B: Deploying Cribl Standalone on a Splunk Indexer


Cribl can natively accept data streams (unbroken events) or events from sources. In this case, data comes directly into Cribl, which processes it and then sends it downstream, including to the local Splunk indexer instance. This is exactly like a Standalone Deployment, but using a Splunk Indexer instance as the host.
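For Cribl to deliver processed events to the local indexer, the indexer needs a Splunk-to-Splunk receiving port that Cribl's Splunk destination can point to. A minimal sketch, assuming port 9997, which is not prescribed here; match it to the port you configure in the Cribl destination (e.g., 127.0.0.1:9997):

# inputs.conf on the local indexer -- port 9997 is an assumption
[splunktcp://9997]
disabled = false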
