
Getting started with Cribl LogStream



Standalone Deployment

Deployment guide to get you started with Cribl

Two key factors will determine the type of Cribl deployment best suited to your environment:

  • Amount of Incoming Data: This is defined as the amount of data planned to be ingested per unit of time. E.g. How many MB/s or GB/day?
  • Amount of Data Processing: This is defined as the amount of processing that will happen on incoming data. E.g. Is most data passing through and just being routed? Or are there a lot of transformations, regex extractions, field encryptions? Is there a need for heavy re-serialization?

When volume is low and/or the amount of processing is light, you can get started with a single-instance deployment. See performance considerations. To accommodate increased load, you will need to scale out with multiple instances.

Single Instance Deployment

For small-volume/light-processing environments, or for test and evaluation use cases, a single instance of Cribl may be sufficient to serve all inputs, process events, and send to outputs. To implement a single-instance Cribl deployment, see below.



  • OS: Linux (RedHat, CentOS, Ubuntu, AWS Linux), MacOS/Darwin
  • System: 4+ CPUs, 4+ GB RAM

Note: 1 CPU here means a physical CPU core. I.e. 2 CPUs = 4 virtual/hyperthreaded CPUs

Installing Cribl on Linux/Mac

  • Select an instance to install on, and get the Cribl package here.
  • Ensure that ports 10080 and 9000 are available. See here.
  • Un-tar in a directory of choice, say, /apps/
    • e.g., tar xvzf cribl-<version>-<build>-<arch>.tgz
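Before extracting, the port check in the steps above can be sketched as a quick shell loop. This is a minimal sketch, assuming a Linux host with the `ss` utility (from iproute2); substitute `netstat` or `lsof` if your distribution lacks it:

```shell
# Check that Cribl's default ports (9000 for the UI/API, 10080 for data
# input) are not already bound before installing.
for port in 9000 10080; do
  if ss -ltn | grep -q ":$port "; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```

If a port is reported in use, either stop the conflicting service or plan to change the corresponding Cribl port after installation.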

Running Cribl

Go to the $CRIBL_HOME directory - this is where the package was extracted, e.g. /apps/cribl/ - and use ./bin/cribl to:

  • Start: ./bin/cribl start [--force]
  • Stop: ./bin/cribl stop [--force]
  • Reload: ./bin/cribl reload [--force]
  • Restart: ./bin/cribl restart [--force]
  • Get status: ./bin/cribl status

Next, go to http://<hostname>:9000 and log in with the default credentials (admin:admin) to start configuring Cribl with Sources and Destinations, or start creating Routes and Pipelines.

Change the admin password immediately after your first login!

Distributed Deployment

To sustain higher incoming data volumes and/or increased processing, you can scale from a single instance to a multi-instance distributed deployment. All instances in the deployment pool are identical in what they do - they serve all inputs, process events, and send to outputs equally. I.e., there are no separate roles for each of these "tasks".


Installing and Running Cribl

The procedure is identical to the single-instance case (above).

Config Management

Configurations for Routes, Pipelines, Functions and every other setting are persisted on disk in configuration files. These text files are in the popular .yml format and are located under $CRIBL_HOME/(default|local)/cribl/. Configurations in local take full precedence over those in default (i.e. there is no layering) and all changes from the UI affect configurations in local only.

To ensure configuration files are synchronized across all Cribl instances, you can use your configuration management system of choice. General implementation steps:

  • Change config files directly, or use the UI of one of the Cribl instances to make changes. E.g. edit functions, add pipelines, etc.
  • Copy/sync the $CRIBL_HOME/local/cribl/ directory to your config management system.
  • Use your config management system to push to all other instances.

Note: Another directory that needs to be synchronized is $CRIBL_HOME/data/ - it contains samples and captures, but more importantly, lookup files.
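As an illustration of the copy/sync steps above, here is a self-contained sketch using local directories. /tmp/cribl-demo stands in for $CRIBL_HOME on the instance you edited, and /tmp/cribl-peer for a second instance; in practice, your config management system (or rsync over ssh) would perform the push to each remote host:

```shell
SRC=/tmp/cribl-demo    # stand-in for $CRIBL_HOME on the edited instance
PEER=/tmp/cribl-peer   # stand-in for $CRIBL_HOME on another instance

# Pretend a UI edit wrote a pipeline config, and a lookup file was uploaded.
mkdir -p "$SRC/local/cribl" "$SRC/data/lookups"
echo "routes: []" > "$SRC/local/cribl/routes.yml"
echo "key,value"  > "$SRC/data/lookups/demo.csv"

# Sync both the config directory and the data directory to the peer.
mkdir -p "$PEER/local/cribl" "$PEER/data"
cp -a "$SRC/local/cribl/." "$PEER/local/cribl/"
cp -a "$SRC/data/."        "$PEER/data/"

ls "$PEER/local/cribl/" "$PEER/data/lookups/"
```

The file names above are examples only; the point is that both local/cribl/ and data/ travel together, so lookups referenced by pipelines resolve on every instance.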

For new configuration changes to take effect a reload or a restart may be necessary:

  • CLI reload: ./bin/cribl reload [--force]
    Reload after changing config files for: routes, pipelines and functions.
  • CLI restart: ./bin/cribl restart [--force]
    Restart after changing config files for: inputs, outputs and system.

Scaling and Load Balancing

As your needs increase, you can scale horizontally by adding more instances. If incoming data flows in via load balancers, make sure to register all new instances with them. Each Cribl instance also exposes a health endpoint that your load balancer can check to make data/connection routing decisions.

Health Check Endpoint:

curl http://<host>:<port>/api/v1/health
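A load balancer typically treats any failed or non-2xx response as unhealthy. A minimal sketch of such a probe in shell, with a hypothetical worker hostname (cribl-worker-1 is an example, not a name from this guide):

```shell
# Returns "up" if the health endpoint answers with an HTTP 2xx, "down"
# otherwise. With --fail, curl exits non-zero on HTTP error statuses,
# so the exit code alone can drive the routing decision.
check_health() {
  if curl -sf --max-time 2 "http://$1:$2/api/v1/health" > /dev/null; then
    echo up
  else
    echo down
  fi
}

check_health cribl-worker-1 9000   # hypothetical instance name
```

Most load balancers (HAProxy, NGINX, AWS ELB) can be pointed at the same /api/v1/health path directly instead of using a script like this.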



Securing

Cribl's API/UI access can be secured by configuring SSL. You can use your own private keys and certs, or you can generate a pair with OpenSSL:

openssl req -nodes -new -x509 -newkey rsa:2048 -keyout myKey.pem -out myCert.pem -days 420

This command will generate a self-signed certificate valid for 420 days, and an unencrypted 2048-bit RSA private key.
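To confirm what was generated, you can inspect the certificate's validity window. The sketch below repeats the generation command, adding only a -subj flag so it runs without the interactive identity prompts; the CN value is an example, not a requirement:

```shell
# Same generation command as above, made non-interactive with -subj.
openssl req -nodes -new -x509 -newkey rsa:2048 \
  -keyout myKey.pem -out myCert.pem -days 420 \
  -subj "/CN=cribl.example.com"

# Print the notBefore/notAfter dates of the self-signed cert.
openssl x509 -in myCert.pem -noout -dates
```

The notAfter date should land roughly 420 days out, matching the -days argument.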

Key and Cert can be configured via Settings > System Settings > API Server Settings. Alternatively, you can manually set the privKeyPath and certPath attributes in the api section of local/cribl.yml. E.g.:

api:
  port: 9000
  disabled: false
  ssl:
    disabled: false
    privKeyPath: /path/to/myKey.pem
    certPath: /path/to/myCert.pem


Monitoring

To assess the operational posture of a single-instance deployment, the following can be used:

  • Stats Tab: exposes information about traffic in and out of the system. It tracks events and bytes over time, split by data fields.

  • Cribl.log: contains comprehensive information about the status of the instance, its inputs, outputs, pipelines, routes, functions and traffic metrics.

Monitoring a distributed deployment can be done by forwarding Cribl's internal data to your preferred log and metrics monitoring solution. From there, you can create dashboards, run alerts, and make operational decisions. To send internal data out of Cribl, go to Sources and enable Cribl Internal. This will send cribl.log down the routes and pipelines just like any other data source.
