
Distributed Deployment

Getting started with Cribl LogStream on a distributed deployment

To sustain higher incoming data volumes, and/or increased processing, you can scale from a single instance up to a multi-instance, distributed deployment. The instances are managed centrally by a single Leader Node, which is responsible for keeping configurations in sync, and for tracking and monitoring the instances' activity metrics.

👍

For some use cases for distributed deployments, see Worker Groups – What Are They and Why You Should Care.

As of version 3.0, LogStream's former "master" application components are renamed "leader." While some legacy terminology remains within CLI commands/options, configuration keys/values, and environment variables, this document reflects the new naming.

Concepts

Single Instance – a single Cribl LogStream instance, running as a standalone (not distributed) installation on one server.

Leader Node – a LogStream instance running in Leader mode, used to centrally author configurations and monitor Worker Nodes in a distributed deployment.

Worker Node – a LogStream instance running as a managed Worker, whose configuration is fully managed by a Leader Node. (By default, a Worker Node polls the Leader for configuration changes every 10 seconds.)

Worker Group – a collection of Worker Nodes that share the same configuration. You map Nodes to a Worker Group using a Mapping Ruleset.

Worker Process – a Linux process within a Single Instance, or within Worker Nodes, that handles data inputs, processing, and output. The process count is constrained by the number of physical or virtual CPUs available; for details, see Sizing and Scaling.

Mapping Ruleset – an ordered list of filters, used to map Worker Nodes into Worker Groups.

🚧

Options and Constraints

A Worker Node's local running config can be manually overridden/changed, but changes won't persist on the filesystem. To permanently modify a Worker Node's config: Save, commit, and deploy it from the Leader. See Deploying Configurations below.

With an Enterprise license, you can configure role-based access control at the Worker Group level. Non-administrator users will then be able to access Workers only within those Worker Groups on which they're authorized.

Aggregating Workers

To clarify how the above concepts add up hierarchically, let's use a military metaphor involving toy soldiers:

  • Worker Process = soldier.
  • Worker Node = multiple Worker Processes = squad.
  • Worker Group = multiple Worker Nodes = platoon.

Multiple Worker Groups are very useful in making your configuration reflect organizational or geographic constraints. E.g., you might have a U.S. Worker Group with certain TLS certificates and output settings, versus an APAC Worker Group and an EMEA Worker Group, each with their own distinct certs and settings.

Architecture

This is an overview of a distributed LogStream deployment's components.

Distributed deployment architecture

Here is the division of labor among components of the Leader Node and Worker Node.

Leader Node

  • API Process – Handles all the API interactions.

  • N Config Helpers – One process per Worker Group. Helps with maintaining configs, previews, etc.

Worker Node

  • API Process – Handles communication with the Leader Node (i.e., with its API Process) and handles other API requests.

  • N Worker Processes – Handle all the data processing.

Single-Instance Architecture

For comparison, here's the simpler division of labor on a single-instance deployment, where the separate Leader versus Worker Nodes are essentially condensed into one stack:

  • API Process – Handles all the API interactions.

  • N Worker Processes – Handle all the data processing.

  • One of the Worker Processes is called the leader Worker Process. (Not to be confused with the Leader Node.) This is responsible for writing configs to disk, in addition to data processing.

So here, the API Process handles the same responsibilities as a Leader Node's API Process, while the Worker Processes correspond to the Worker Nodes' Worker Processes. The exception is that one Worker Process does double duty, also filling in for one of the Leader Node's Config Helpers.

Leader Node Requirements

  • OS:

    • Linux: RedHat, CentOS, Ubuntu, Amazon Linux (64-bit)
  • System:

    • 4+ physical cores, 8+ GB RAM
    • 5 GB free disk space
  • Git: git must be available on the Leader Node. See details below.

  • Browser Support: Firefox 65+, Chrome 70+, Safari 12+, Microsoft Edge

📘

We assume that 1 physical core is equivalent to 2 virtual/hyperthreaded CPUs (vCPUs). All quantities listed above are minimum requirements.

🚧

Mac OS is no longer supported as of v. 2.3, due to LogStream's incorporation of Linux-native features.

Worker Node Requirements

See Single-Instance Deployment for requirements and Sizing and Scaling for capacity planning details.

Network Ports – Leader Node

In a distributed deployment, Workers communicate with the Leader Node on these ports. Ensure that the Leader is reachable on those ports from all Workers.

Component      Default Port
Heartbeat      4200

Network Ports – Worker Nodes

By default, all LogStream Worker instances listen on the following ports:

Component      Default Port
UI             9000
User options   Other data ports as required

Installing on Linux

See Single-Instance Deployment, as the installation procedures are identical.

Version Control with git

LogStream requires git (version 1.8.3.1 or higher) to be available locally on the host where the Leader Node will run. Configuration changes must be committed to git before they're deployed.

If you don't have git installed, check here for details on how to get started.
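For example, you can verify the locally available git version before configuring the Leader, and install git with your distribution's package manager if it's missing (a sketch – package names and managers vary by distribution):

git --version
# RedHat/CentOS/Amazon Linux:
sudo yum install -y git
# Ubuntu:
sudo apt-get install -y git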

The Leader Node uses git to:

  • Manage configuration versions across Worker Groups.
  • Provide users with an audit trail of all configuration changes.
  • Allow users to display diffs between current and previous config versions.
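Because the Leader stores these versions in an ordinary git repository, you can also inspect the history directly on the Leader's filesystem. A quick sketch, assuming the config repo is rooted at $CRIBL_HOME:

cd $CRIBL_HOME
git log --oneline -5    # recent config commits (audit trail)
git diff HEAD~1         # changes introduced by the latest commit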

Setting up Leader and Worker Nodes

1. Configuring a Leader Node

You can configure a Leader Node through the UI, through the instance.yml config file, or through the command line.

Using the UI

In global ⚙️ Settings (lower left) > Distributed Settings > Distributed Management > General Settings, select Mode: Leader.

Next, on the Leader Settings left tab, confirm or enter the required Leader settings (Address and Port). Customize the optional settings if desired. Then click Save to restart.

Worker UI Access

If you enable the nearby global ⚙️ Settings > Distributed Settings > Leader Settings > Worker UI access option (which corresponds to the enabledWorkerRemoteAccess key), you will be able to click through from the Leader's Manage Worker Nodes screen to an authenticated view of each Worker's UI. An orange header labeled Viewing Worker: <host/GUID> will be added, to confirm that you are remotely viewing a Worker's UI.

Worker UI access
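In instance.yml terms, this toggle is the enabledWorkerRemoteAccess key shown in the full Leader example below. An abbreviated sketch of enabling it:

distributed:
  mode: master
  master:
    enabledWorkerRemoteAccess: true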

Using YAML Config File

In $CRIBL_HOME/local/_system/instance.yml, under the distributed section, set mode to master:

distributed:
  mode: master
  master:
    host: <IP or 0.0.0.0>              # address the Leader listens on for Worker connections
    port: 4200                         # default Leader port
    tls:
      disabled: true
    ipWhitelistRegex: /.*/             # regex of Worker IP addresses allowed to connect
    authToken: <auth token>            # shared token that Workers use to authenticate
    enabledWorkerRemoteAccess: false   # set to true to allow Worker UI access (see above)
    compression: none
    connectionTimeout: 5000            # milliseconds
    writeTimeout: 10000                # milliseconds

Using the Command Line

You can configure a Leader Node using a CLI command of this form:

./cribl mode-master [options] [args]

For all options, see the CLI Reference.
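For instance, a minimal invocation that accepts the defaults shown in the YAML example above might look like the following. (Restarting afterward is assumed to be required, as with mode-worker.)

./cribl mode-master
./cribl restart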

2. Configuring a Worker Node

On each LogStream instance you designate as a Worker Node, you can configure the Worker through the UI, the instance.yml config file, environment variables, or the command line.

Using the UI

In global ⚙️ Settings (lower left) > Distributed Settings > Distributed Management > General Settings, select Mode: Worker.

Next, confirm or enter the required Leader settings (Address and Port). Customize the optional settings if desired. Then click Save to restart.

Using YAML Config File

In $CRIBL_HOME/local/_system/instance.yml, under the distributed section, set mode to worker:

distributed:
  mode: worker
  envRegex: /^CRIBL_/                  # only env variables matching this regex are sent to the Leader
  master:
    host: <master address>
    port: 4200
    authToken: <token here>            # must match the Leader's authToken
    compression: none
    tls:
      disabled: true
    connectionTimeout: 5000            # milliseconds
    writeTimeout: 10000                # milliseconds
  tags:                                # optional tags, sent to the Leader and usable in Mapping Rules
    - tag1
    - tag2
    - tag42
  group: teamsters                     # optional Group to request (see Mapping Order of Priority)

Using Environment Variables

You can configure Worker Nodes via environment variables, as in this example:

CRIBL_DIST_MASTER_URL=tcp://<authToken>@<leader-hostname>:4203 ./cribl start

See the Environment Variables section for more details.
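For example, the same URL format also accepts optional group and tag query parameters (the values below are placeholders, and the URL is quoted so that the shell doesn't interpret the & characters):

CRIBL_DIST_MASTER_URL='tcp://<authToken>@<leader-host>:4200?group=default&tag=dc1&tag=edge' ./cribl start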

Using the Command Line

You can configure a Worker Node using CLI commands of this form:

./cribl mode-worker -H <master-hostname-or-IP> -p <port> [options] [args]

The -H and -p parameters are required. For other options, see the CLI Reference. Here is an example command:

./cribl mode-worker -H 192.0.2.1 -p 4200 -u myAuthToken

LogStream will need to restart after this command is issued.

Menu Changes in Distributed Mode

Compared to a single-instance deployment, deploying in distributed mode changes LogStream's menu structure in a few ways. The left nav adds Leader Mode, Groups, and Workers tabs – all to manage Workers and their assignments. Also, the global Monitoring link moves from the top to the left nav.

Distributed deployment: menu structure

To access the Group-specific top nav shown above, click Groups, then click into your desired Worker Group. This contextual top nav also adds a Settings tab, through which you can manage configuration per Worker Group.

If you have a LogStream Free or LogStream One license and use distributed mode, the left nav's Groups link instead reads Configure, because these license types allow only one group. Therefore, throughout this documentation, interpret any reference to the "Groups link" as "Configure link" in your installation. Here, the top nav's added Settings link opens configuration specific to the same default group.

Distributed deployment with LogStream Free/One license

For comparison, here is a single-instance deployment's consolidated top-menu structure:

Single-instance deployment: anchored top menu

Managing Worker Nodes

If you have an Enterprise or Standard license, clicking the left nav's Workers tab opens a Manage Worker Nodes page with two upper tabs. The Workers tab provides status information for each Worker Node in the selected Worker Group. You can expand each Node's row to display additional details and controls.

Workers > Worker Nodes status/controls

Click the Mappings tab to display status and controls for the active Mapping Ruleset:

Workers > Mappings status/controls

Click into a Ruleset to manage and preview its contained Rules:

Managing Ruleset page

🚧

Distributed mode's repositioning of navigation/menu links also applies to several instructions and screenshots that you'll see throughout this documentation.

Where procedures are written around a single-instance scenario, just click into your appropriate Group to access the corresponding navigation links.

How Do Workers and the Leader Work Together?

The Leader Node has two primary roles:

  1. Serves as a central location for Workers' operational metrics. The Leader ships with a monitoring console that has a number of dashboards, covering almost every operational aspect of the deployment.

  2. Serves as a central location for authoring, validating, deploying, and synchronizing configurations across Worker Groups.

Leader Node/Worker Nodes relationship

Network Port Requirements (Defaults)

  • UI access to Leader Node: TCP 9000.
  • Worker Node to Leader Node: TCP 4200 (Heartbeat/Metrics/other).

Leader/Worker Node Communication

Workers periodically (every 10 seconds) send a heartbeat to the Leader. This heartbeat includes information about the Worker itself, plus a set of current system metrics. The heartbeat payload includes facts – such as hostname, IP address, GUID, tags, environment variables, and current software/configuration version – that the Leader tracks with the connection.

If a Worker Node fails to send two consecutive heartbeat messages, the Leader removes that Worker from the Workers page in its UI until it receives a new heartbeat from the affected Worker.

When a Worker Node checks in with the Leader:

  • The Worker sends a heartbeat to the Leader.
  • The Leader uses the Worker’s facts and Mapping Rules to map it to a Worker Group.
  • The Worker Node pulls its Group's updated configuration bundle, if necessary.

Config Bundle Management

Config bundles are compressed archives of all config files and associated data that a Worker needs to operate. The Leader creates bundles upon Deploy, and manages them as follows:

  • Bundles are wiped clean on startup.
  • While running, at most 5 bundles per group are kept.
  • Bundle cleanup is invoked when a new bundle is created.

The Worker pulls bundles from the Leader and manages them as follows:

  • Last 5 bundles and backup files are kept.
  • At any point in time, all files created in the last 10 minutes are kept.
  • Bundle cleanup is invoked after a reconfigure.

Worker Groups

Worker Groups facilitate authoring and management of configuration settings for a particular set of Workers. To create a new Worker Group, click Groups from the left nav and, from the resulting Manage Groups page, click + Add New.

👍

Configuring multiple Worker Groups requires a LogStream Enterprise or Standard license, and configuring more than 10 Worker Processes requires at least a LogStream One license.

Configuring a Worker Group

Click the newly created Group's Configure button to display an interface for authoring and validating its configuration. You can configure everything for this Group as if it were a single LogStream instance – using a similar visual interface for Routes, Pipelines, Sources, Destinations, and Group-specific Settings.

🚧

Can't Log into the Worker Node as Admin User?

To explicitly set passwords for Worker Groups, see User Authentication.

Mapping Workers to Worker Groups

Mapping Rulesets are used to map Workers to Worker Groups. Within a ruleset, a list of rules evaluates Filter expressions against the information that Workers send to the Leader.

Only one Mapping Ruleset can be active at any one time, although a ruleset can contain multiple rules. At least one Worker Group should be defined and present in the system.

The ruleset behavior is similar to Routes, where the order matters, and the Filter section supports full JS expressions. The ruleset matching strategy is first-match, and one Worker can belong to only one Worker Group.

Creating a Mapping Ruleset

To create a Mapping Ruleset, click Mappings from the left nav and then, from the resulting Manage Mapping Rulesets page, click + Add New. Give the resulting New Ruleset a unique ID and click Save.

📘

The Mappings left-nav link appears only when you have started LogStream with global ⚙️ Settings (lower left) > Distributed Settings > Mode set to Leader.

On the resulting Manage Mapping Rulesets page, click your new ruleset's Configure button, and start adding rules by clicking on + Rule. While you build and refine rules, the Preview in the right pane will show which currently reporting and tracked workers map to which Worker Groups.

A ruleset must be activated before it can be used by the Leader. To activate it, go to Mappings and click Activate on the required ruleset. The Activate button will then change to an Active toggle. Using the adjacent buttons, you can also Configure or Delete a ruleset, or Clone a ruleset if you'd like to work on it offline, test different filters, etc.

Although not required, Workers can be configured to send a Group with their payload. See Mapping Order of Priority below for how this ranks.

Add a Mapping Rule – Example

Within a Mapping Ruleset, click + Add Rule to define a new rule. Assume that you want to map to Group420 all hosts that satisfy this set of conditions:

  • IP address starts with 10.10.42, AND:
  • More than 6 CPUs, OR the CRIBL_HOME environment variable contains w0.

Rule Configuration

  • Rule Name: myFirstRule
  • Filter: conn_ip.startsWith('10.10.42.') && (cpus > 6 || env.CRIBL_HOME.match('w0'))
  • Group: Group420

Default Worker Group and Mapping

When a LogStream instance runs as Leader, the following are created automatically:

  • A default Worker Group.
  • A default Mapping Ruleset,
    • with a default Rule matching all (true).

Mapping Order of Priority

Priority for mapping to a group is as follows: Mapping Rules > Group sent by Worker > default Group.

  • If a Filter matches, use that Group.
  • Else, if a Worker has a Group defined, use that.
  • Else, map to the default Group.
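Expressed as pseudocode, the decision looks roughly like this (illustrative only – not LogStream source code):

// First match wins across the active ruleset's ordered Rules
function resolveGroup(workerFacts, rules, defaultGroup) {
  for (const rule of rules) {
    if (rule.filter(workerFacts)) return rule.group;  // Mapping Rule match
  }
  if (workerFacts.group) return workerFacts.group;    // Group sent by the Worker
  return defaultGroup;                                // default Group
}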

Deploying Configurations

Your typical workflow for deploying LogStream configurations is the following:

  1. Work on configs.
  2. Save your changes.
  3. Commit (and optionally push).
  4. Deploy.

Deployment is the last step after configuration changes have been saved and committed. Deploying here means propagating updated configs to Workers. You deploy new configurations at the Group level: Locate your desired Group and click on Deploy. Workers that belong to the group will start pulling updated configurations on their next check-in with the Leader.

🚧

Can't Log into the Worker Node as Admin User?

When a Worker Node pulls its first configs, the admin password will be randomized, unless specifically changed. This means that users won't be able to log in on the Worker Node with default credentials. For details, see User Authentication.

Configuration Files

On the Leader, a Worker Group's configuration lives under:
$CRIBL_HOME/groups/<groupName>/local/cribl/.

On the managed Worker, after configs have been pulled, they're extracted under: $CRIBL_HOME/local/cribl/.

Lookup Files

On the Leader, a Group's lookup files live under: $CRIBL_HOME/groups/<groupName>/data/lookups.

On the managed Worker, after configs have been pulled, lookups are extracted under: $CRIBL_HOME/data/lookups. When deployed via the Leader, lookup files are distributed to Workers as part of a configuration deployment.

If you want your lookup files to be part of the LogStream configuration's version control process, we recommend deploying them via the Leader Node. Otherwise, you can update lookup files out-of-band on the individual Workers. The latter approach is especially useful for larger lookup files (e.g., > 10 MB), for files maintained by some other mechanism, or for files that are updated frequently.

For other options, see Managing Large Lookups.

📘

Some configuration changes will require restarts, while many others require only reloads. See here for details.

Restarts/reloads of each Worker Process are handled automatically by the Worker. Note that individual Worker Nodes might temporarily disappear from the Leader's Workers tab while restarting.

Worker Process Rolling Restart

During a restart, to minimize ingestion disruption and increase availability of network ports, Worker Processes on a Worker Node are restarted in a rolling fashion. 20% of running processes – with a minimum of one process – are restarted at a time. A Worker Process must come up and report as started before the next one is restarted. This rolling restart continues until all processes have restarted. If a Worker Process fails to restart, configurations will be rolled back.
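For example, on a Worker Node running 10 Worker Processes, two processes (20% of 10) are restarted at a time; on a Node running only 3 processes, they restart one at a time, per the one-process minimum.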

Auto-Scaling Workers and Load-Balancing Incoming Data

If data flows in via Load Balancers, make sure to register all Worker instances with them. Each Cribl LogStream node exposes a health endpoint that your Load Balancer can check when making data/connection routing decisions.

Health Check Endpoint:
curl http://<host>:<port>/api/v1/health

Healthy Response:
{"status":"healthy"}

Environment Variables

  • CRIBL_DIST_MASTER_URL – URL of the Leader Node. Format: <tls|tcp>://<authToken>@host:port?group=defaultGroup&tag=tag1&tag=tag2&tls.<tls-settings below>. Example: CRIBL_DIST_MASTER_URL=tls://<authToken>@leader:4200. (See the expanded example after this list.)

    • tls.privKeyPath – Private Key Path.
    • tls.passphrase – Key Passphrase.
    • tls.caPath – CA Certificate Path.
    • tls.certPath – Certificate Path.
    • tls.rejectUnauthorized – Validate Client Certs. Boolean, defaults to false.
    • tls.requestCert – Authenticate Client (mutual auth). Boolean, defaults to false.
    • tls.commonNameRegex – Regex matching peer certificate > subject > common names allowed to connect. Used only if tls.requestCert is set to true.
  • CRIBL_DIST_MODE – worker | master. Defaults to worker iff CRIBL_DIST_MASTER_URL is present.

  • CRIBL_HOME – Set automatically on startup. Defaults to the parent of the bin directory.

  • CRIBL_CONF_DIR – Set automatically on startup. Defaults to the parent of the bin directory.

  • CRIBL_NOAUTH – Disables authentication. Careful here!!

  • CRIBL_VOLUME_DIR – Sets a directory that persists modified data between different containers or ephemeral instances.
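For example, several of these variables can be combined into a single startup command. The sketch below uses placeholder values, and quotes the URL so that the shell doesn't interpret the & characters:

CRIBL_DIST_MODE=worker \
CRIBL_DIST_MASTER_URL='tls://<authToken>@<leader-host>:4200?group=default&tls.rejectUnauthorized=true&tls.caPath=/path/to/ca.pem' \
./cribl start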

Deprecated Variables

These were removed as of LogStream 3.0:

  • CRIBL_CONFIG_LOCATION.
  • CRIBL_SCRIPTS_LOCATION.

Workers GUID

When you install and first run the software, a GUID is generated and stored in a .dat file located in $CRIBL_HOME/bin/, e.g.:

# cat $CRIBL_HOME/bin/676f6174733432.dat
{"it":1570724418,"phf":0,"guid":"48f7b21a-0c03-45e0-a699-01e0b7a1e061"}

When deploying Cribl LogStream as part of a host image or VM, be sure to remove this file, so that you don't end up with duplicate GUIDs. The file will be regenerated on next run.
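For example, before snapshotting the image (the exact filename varies per install, and the file is regenerated on the next run):

rm $CRIBL_HOME/bin/*.dat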
