
Distributed Deployment

Getting started with Cribl LogStream on a distributed deployment


To sustain higher incoming data volumes, and/or increased processing, you can scale from a single instance up to a multi-instance, distributed deployment. Instances in the deployment independently serve all inputs, process events, and send to outputs.

The instances are managed centrally by a single Master Node, which is responsible for keeping configurations in sync, and for tracking and monitoring the instances' activity metrics.

👍

For some use cases for distributed deployments, see Worker Groups – What Are They and Why You Should Care.

Concepts

Single Instance – a single Cribl LogStream instance, running as a standalone (not distributed) installation on one server.

Master Node – a LogStream instance running in Master mode, used to centrally author configurations and monitor Worker Nodes in a distributed deployment.

Worker Node – a LogStream instance running as a managed Worker, whose configuration is fully managed by a Master Node. (By default, a Worker Node will poll the Master for configuration changes every 10 seconds.)

Worker Group – a collection of Worker Nodes that share the same configuration. You map Nodes to a Worker Group using a Mapping Ruleset.

Worker Process – a Linux process within a Single Instance, or within Worker Nodes, that handles data inputs, processing, and output. The process count is constrained by the number of physical or virtual CPUs available; for details, see Sizing and Scaling.

Mapping Ruleset – an ordered list of filters, used to map Worker Nodes into Worker Groups.

🚧

A Worker Node's local running config can be manually overridden/changed, but changes won't persist on the filesystem. To permanently modify a Worker Node's config, save, commit, and deploy it from the Master. See Deploying Configurations below.

LogStream 2.4 introduces role-based access control at the Worker Group level. Users can access Workers only within the Worker Groups to which they've been granted access.

Aggregating Workers

To clarify how the above concepts add up hierarchically, let's use a military metaphor involving toy soldiers:

  • Worker Process = soldier.
  • Worker Node = multiple Worker Processes = squad.
  • Worker Group = multiple Worker Nodes = platoon.

Multiple Worker Groups are useful when configuration must reflect organizational or geographic constraints. E.g., you might have a U.S. Worker Group with certain TLS certificates and output settings, versus an APAC Worker Group and an EMEA Worker Group that each have distinct certs and settings.

Architecture

This is an overview of a distributed LogStream deployment's components.

Distributed deployment architecture

Master Node Requirements

  • OS:
    • Linux: RedHat, CentOS, Ubuntu, AWS Linux (64-bit)
  • System:
    • 4+ physical cores, 8+ GB RAM
    • 5 GB free disk space
  • Git: git must be available on the Master Node. See details below.
  • Browser Support: Firefox 65+, Chrome 70+, Safari 12+, Microsoft Edge

📘

We assume that 1 physical core is equivalent to 2 virtual/hyperthreaded CPUs (vCPUs). All quantities listed above are minimum requirements.

🚧

Mac OS is no longer supported as of v. 2.3, due to LogStream's incorporation of Linux-native features.

Worker Node Requirements

See Single-Instance Deployment for requirements and Sizing and Scaling for capacity planning details.

Network Ports – Master Node

In a distributed deployment, Workers communicate with the Master Node on these ports. Ensure that the Master is reachable on those ports from all Workers.

Component      Default Port
Heartbeat      4200

Network Ports – Worker Nodes

By default, all LogStream Worker instances listen on the following ports:

Component       Default Port
UI              9000
User options    Other data ports, as required

Installing on Linux

See Single-Instance Deployment, as the installation procedures are identical.

Version Control with git

LogStream requires git (version 1.8.3.1 or higher) to be available locally on the host where the Master Node will run. Configuration changes must be committed to git before they're deployed.

If you don't have git installed, check here for details on how to get started.

The Master Node uses git to:

  • Manage configuration versions across Worker Groups.
  • Provide users with an audit trail of all configuration changes.
  • Allow users to display diffs between current and previous config versions.
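
If git isn't already present, most package managers can install a suitable version. The commands below are a minimal sketch, assuming a default installation where the Master's config repository lives under $CRIBL_HOME:

# Confirm that git meets the minimum version (1.8.3.1 or higher)
git --version

# Inspect the audit trail of committed configuration changes
cd $CRIBL_HOME
git log --oneline -n 10

# Show differences between the current and last-committed configs
git diff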

Setting up Master and Worker Nodes

1. Configuring a Master Node

You can configure a Master Node through the UI, through the instance.yml config file, or through the command line.

Using the UI

In Settings > Distributed Settings > Distributed Management > General Settings, select Mode: Master. Supply the required Master settings (Address and Port). Customize the optional settings if desired. Then click Save to restart.

👍

Worker UI Access

If you enable the nearby Distributed Settings > Master Settings > Worker UI access option (enabledWorkerRemoteAccess key), you will be able to click through from the Master's Manage Worker Nodes screen to an authenticated view of each Worker's UI. An orange header labeled Viewing Worker: <host/GUID> will appear to confirm that you are remotely viewing a Worker's UI.

Worker UI access

Using YAML Config File

In $CRIBL_HOME/local/_system/instance.yml, under the distributed section, set mode to master:

distributed:
  mode: master
  master:
    host: <IP or 0.0.0.0>
    port: 4200
    tls:
      disabled: true
    ipWhitelistRegex: /.*/
    authToken: <auth token>
    enabledWorkerRemoteAccess: false
    compression: none
    connectionTimeout: 5000
    writeTimeout: 10000

Using the Command Line

You can configure a Master Node using a CLI command of this form:

./cribl mode-master [options] [args]

For all options, see the CLI Reference.
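
For example, assuming the default bind address and port (0.0.0.0:4200) are acceptable, a minimal invocation might look like the following; LogStream must then restart for the mode change to take effect:

./cribl mode-master
./cribl restart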

2. Configuring a Worker Node

On each LogStream instance you designate as a Worker Node, you can configure the Worker through the UI, the instance.yml config file, environment variables, or the command line.

Using the UI

In Settings > Distributed Settings > Distributed Management > General Settings, select Mode: Worker. Supply the required Master settings (Address and Port). Customize the optional settings if desired. Then click Save to restart.

Using YAML Config File

In $CRIBL_HOME/local/_system/instance.yml, under the distributed section, set mode to worker:

distributed:
  mode: worker
  envRegex: /^CRIBL_/
  master:
    host: <master address>
    port: 4200
    authToken: <token here>
    compression: none
    tls:
      disabled: true
    connectionTimeout: 5000
    writeTimeout: 10000
  tags:
    - tag1
    - tag2
    - tag42
  group: teamsters

Using Environment Variables

You can configure Worker Nodes via environment variables, as in this example:

CRIBL_DIST_MASTER_URL=tcp://<authToken>@<master-hostname-or-IP>:4200 ./cribl start

See the Environment Variables section for more details.
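
As documented under Environment Variables below, the URL also accepts a Worker Group and tags as query parameters. A minimal sketch, with a placeholder token, hostname, group, and tags:

CRIBL_DIST_MASTER_URL='tcp://<authToken>@<master-host>:4200?group=us-east&tag=dc1&tag=gcp' ./cribl start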

Using the Command Line

You can configure a Worker Node using CLI commands of this form:

./cribl mode-worker -H <master-hostname-or-IP> -p <port> [options] [args]

The -H and -p parameters are required. For other options, see the CLI Reference. Here is an example command:

./cribl mode-worker -H 192.0.2.1 -p 4200 -u myAuthToken

LogStream will need to restart after this command is issued.

Menu Changes in Distributed Mode

Compared to a single-instance deployment, deploying in distributed mode changes LogStream's menu structure in a few ways. The top menu adds Worker Groups, Workers, and Mappings tabs – all to manage Workers and their assignments.

Distributed deployment: menu structure

If you have a LogStream Free or LogStream One license, the Worker Groups tab instead reads Default Group, because these license types allow only this single group. Therefore, throughout this documentation, interpret any reference to the "Worker Groups tab" as "Default Group tab" in your installation.

Distributed deployment with LogStream Free/One license

To access the Data (drop-down), Routes, Pipelines, and Knowledge items on the light-colored submenu shown above, click the Worker Groups tab, then click into your desired Worker Group to display its submenu. This submenu also adds a System Settings tab, through which you can manage configuration per Worker Group.

(With a LogStream Free or LogStream One license, you'd click the Default Group tab, whose System Settings submenu tab configures only that single group.)

For comparison, here is a single-instance deployment's single-level top menu:

Single-instance deployment: single-level menu

🚧

This repositioning of Data, Routes, Pipelines, and Knowledge tabs to the Worker Groups (or Default Group) submenu also applies to several instructions and screenshots that you'll see throughout this documentation.

Where procedures are written around a single-instance scenario, just click into your appropriate Worker Group to access the same tabs on its submenu.

How Do Workers and the Master Work Together?

The Master Node has two primary roles:

  1. Serves as a central location for Workers' operational metrics. The Master ships with a monitoring console that has a number of dashboards, covering almost every operational aspect of the deployment.

  2. Serves as a central location for authoring, validating, deploying, and synchronizing configurations across Worker Groups.

Master Node/Worker Nodes relationship

Network Port Requirements (Defaults)

  • UI access to Master Node: TCP 9000.
  • Worker Node to Master Node: TCP 4200 (Heartbeat/Metrics/other).

Master/Worker Node Communication

Workers will periodically (every 10 seconds) send a heartbeat to the Master. This heartbeat includes information about themselves, and a set of current system metrics. The heartbeat payload includes facts – such as hostname, IP address, GUID, tags, environment variables, current software/configuration version, etc. – that the Master tracks with the connection.

If a Worker Node fails to send two consecutive heartbeat messages, it is removed from the Workers page in the Master's UI until the Master receives a heartbeat from it again.

When a Worker Node checks in with the Master:

  • The Worker sends a heartbeat to the Master.
  • The Master uses the Worker’s facts and Mapping Rules to map it to a Worker Group.
  • The Worker Node pulls its Group's updated configuration bundle, if necessary.

Config Bundle Management

Config bundles are compressed archives of all config files and associated data that a Worker needs to operate. The Master creates bundles upon Deploy, and manages them as follows:

  • Bundles are wiped clean on startup.
  • While running, at most 5 bundles per group are kept.
  • Bundle cleanup is invoked when a new bundle is created.

The Worker pulls bundles from the Master and manages them as follows:

  • Last 5 bundles and backup files are kept.
  • At any point in time, all files created in the last 10 minutes are kept.
  • Bundle cleanup is invoked after a reconfigure.

Worker Groups

Worker Groups facilitate authoring and management of configuration settings for a particular set of Workers. To create a new Worker Group, go to the Worker Groups top-level menu and click + Add New.

Configuring a Worker Group

Click on the newly created Group to display an interface for authoring and validating its configuration. You can configure everything for this Group as if it were a single Cribl LogStream instance – using exactly the same visual interface for Routes, Pipelines, Sources, Destinations and System Settings.

🚧

Can't Log into the Worker Node as Admin User?

To explicitly set passwords for Worker Groups, see User Authentication.

Mapping Workers to Worker Groups

Mapping Rulesets are used to map Workers to Worker Groups. Within a ruleset, an ordered list of rules evaluates Filter expressions against the information that Workers send to the Master.

Only one Mapping Ruleset can be active at any one time, although a ruleset can contain multiple rules. At least one Worker Group should be defined and present in the system.

The ruleset behavior is similar to Routes, where the order matters, and the Filter section supports full JS expressions. The ruleset matching strategy is first-match, and one Worker can belong to only one Worker Group.

Creating a Mapping Ruleset

To create a Mapping Ruleset, start on the Mappings top-level menu, then click + Add New.

📘

The Mappings top-level menu appears only when you have started LogStream with Distributed Settings > Mode set to Master.

Click on the newly created item, and start adding rules by clicking on + Add Rule. While you build and refine rules, the Preview in the right pane will show which currently reporting and tracked workers map to which Worker Groups.

A ruleset must be activated before it can be used by the Master. To activate it, go to Mappings and click Activate on the required ruleset. The Activate button will then change to an Active toggle. Using the adjacent buttons, you can also Configure or Delete a ruleset, or Clone a ruleset if you'd like to work on it offline, test different filters, etc.

Although not required, Workers can be configured to send a Group with their payload. See Mapping Order of Priority below for how this ranks.

Add a Mapping Rule – Example

Within a Mapping Ruleset, click + Add Rule to define a new rule. Assume that you want to map to Group420 all hosts that satisfy this set of conditions:

  • IP address starts with 10.10.42, AND
  • the host has more than 6 CPUs, OR its CRIBL_HOME environment variable contains w0.

Rule Configuration

  • Rule Name: myFirstRule
  • Filter: conn_ip.startsWith('10.10.42.') && (cpus > 6 || env.CRIBL_HOME.match('w0'))
  • Group: Group420
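
For reference, here are two additional, purely illustrative Filter expressions that use only the facts shown above (conn_ip, cpus, and environment variables); the subnet and path fragment are arbitrary placeholders:

  • Filter: conn_ip.startsWith('192.168.1.') && cpus >= 16 – maps larger Workers in a particular subnet.
  • Filter: env.CRIBL_HOME.match('prod') – maps Workers whose CRIBL_HOME path contains prod.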

Default Worker Group and Mapping

When a LogStream instance runs as Master, the following are created automatically:

  • A default Worker Group.
  • A default Mapping Ruleset,
    • with a default Rule matching all (true).

Mapping Order of Priority

Priority for mapping to a group is as follows: Mapping Rules > Group sent by Worker > default Group.

  • If a Filter matches, use that Group.
  • Else, if a Worker has a Group defined, use that.
  • Else, map to the default Group.
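
The following JavaScript-style sketch is purely illustrative – it is not LogStream's implementation – but it summarizes the documented resolution order, with rules evaluated first-match:

// Illustrative only: documented priority is Mapping Rules > Group sent by Worker > default Group.
function resolveGroup(facts, rules, defaultGroup) {
  for (const rule of rules) {                  // rules evaluated in order; first match wins
    if (rule.filter(facts)) return rule.group;
  }
  if (facts.group) return facts.group;         // Group sent by the Worker, if any
  return defaultGroup;                         // otherwise, the default Group
}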

Deploying Configurations

Your typical workflow for deploying LogStream configurations is the following:

  1. Work on configs.
  2. Save your changes.
  3. Commit (and optionally push).
  4. Deploy.

Deployment is the last step after configuration changes have been saved and committed. Deploying here means propagating updated configs to Workers. You deploy new configurations at the Group level: Locate your desired Group and click on Deploy. Workers that belong to the group will start pulling updated configurations on their next check-in with the Master.

🚧

Can't Log into the Worker Node as Admin User?

When a Worker Node pulls its first configs, the admin password will be randomized, unless specifically changed. This means that users won't be able to log in on the Worker Node with default credentials. For details, see User Authentication.

Configuration Files

On the Master, a Worker Group's configuration lives under:
$CRIBL_HOME/groups/<groupName>/local/cribl/.

On the managed Worker, after configs have been pulled, they're extracted under: $CRIBL_HOME/local/cribl/.

Lookup Files

On the Master, a Group's lookup files live under: $CRIBL_HOME/groups/<groupName>/data/lookups.

On the managed Worker, after configs have been pulled, lookups are extracted under: $CRIBL_HOME/data/lookups. When deployed via the Master, lookup files are distributed to Workers as part of a configuration deployment.

If you want your lookup files to be part of the LogStream configuration's version control process, we recommend deploying them via the Master Node. Otherwise, you can update your lookup files out-of-band on the individual Workers. The latter is especially useful for larger lookup files (> 10 MB, for example), for lookup files maintained using some other mechanism, or for lookup files that are updated frequently.

For other options, see Managing Large Lookups.
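
For an out-of-band update, you might copy the new lookup file directly into each Worker's lookups directory. A minimal sketch, where the file name, Worker hostnames, and the /opt/cribl install path (a typical $CRIBL_HOME) are placeholders:

# Push an updated lookup file straight to each Worker, bypassing the Master
for host in worker01 worker02; do
  scp geo.csv "$host":/opt/cribl/data/lookups/
done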

📘

Some configuration changes will require restarts, while many others require only reloads. See here for details.

Restarts/reloads of each Worker Process are handled automatically by the Worker. Note that individual Worker Nodes might temporarily disappear from the Master's Workers tab while restarting.

Worker Process Rolling Restart

During a restart, to minimize ingestion disruption and increase availability of network ports, Worker Processes on a Worker Node are restarted in a rolling fashion. 20% of running processes – with a minimum of one process – are restarted at a time. A Worker Process must come up and report as started before the next one is restarted. This rolling restart continues until all processes have restarted. If a Worker Process fails to restart, configurations will be rolled back.

Auto-Scaling Workers and Load-Balancing Incoming Data

If data flows in via load balancers, make sure to register all Worker instances with them. Each Cribl LogStream node exposes a health endpoint that your load balancer can check when making data/connection routing decisions.

Health Check Endpoint:

curl http://<host>:<port>/api/v1/health

Healthy Response:

{"status":"healthy"}

Environment Variables

  • CRIBL_DIST_MASTER_URL – URL of the Master Node. Format: <tls|tcp>://<authToken>@host:port?group=defaultGroup&tag=tag1&tag=tag2&tls.<tls-settings below>.
    • tls.privKeyPath – Private Key Path.
    • tls.passphrase – Key Passphrase.
    • tls.caPath – CA Certificate Path.
    • tls.certPath – Certificate Path.
    • tls.rejectUnauthorized – Validate Client Certs. Boolean, defaults to false.
    • tls.requestCert – Authenticate Client (mutual auth). Boolean, defaults to false.
    • tls.commonNameRegex – Regex matching peer certificate > subject > common names allowed to connect. Used only if tls.requestCert is set to true.
  • CRIBL_DIST_MODE – worker | master. Defaults to worker iff CRIBL_DIST_MASTER_URL is present.
  • CRIBL_HOME – Auto setup on startup. Defaults to parent of bin directory.
  • CRIBL_CONF_DIR – Auto setup on startup. Defaults to parent of bin directory.
  • CRIBL_NOAUTH – Disables authentication. Careful here!!
  • CRIBL_VOLUME_DIR – Sets a directory that persists modified data between different containers or ephemeral instances.
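
For example, a Worker can be pointed at the Master over TLS by combining the URL format above with the tls.* settings. A minimal sketch; the token, hostname, and certificate path are placeholders:

CRIBL_DIST_MASTER_URL='tls://<authToken>@<master-host>:4200?tls.caPath=/path/to/ca.pem&tls.rejectUnauthorized=true' ./cribl start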

Deprecated Variables

These will be removed as of LogStream 3.0:

  • CRIBL_CONFIG_LOCATION.
  • CRIBL_SCRIPTS_LOCATION.

Workers GUID

When you install and first run the software, a GUID is generated and stored in a .dat file located in $CRIBL_HOME/bin/, e.g.:

# cat $CRIBL_HOME/bin/676f6174733432.dat
{"it":1570724418,"phf":0,"guid":"48f7b21a-0c03-45e0-a699-01e0b7a1e061"}

When deploying Cribl LogStream as part of a host image or VM, be sure to remove this file, so that you don't end up with duplicate GUIDs. The file will be regenerated on next run.
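
For example, when preparing a host image, you might remove the file as part of your image-build script. The file name below matches the example above; yours will differ:

# Remove the instance-specific GUID file before capturing the image;
# LogStream regenerates it on next run.
rm $CRIBL_HOME/bin/676f6174733432.dat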
