Destination Architecture

Cribl sends processed data to external systems using three primary Destination types: Streaming, Non-Streaming (Batch), and Internal. Understanding the architectural considerations for each, including deployment, scaling, and security, is essential for building a resilient and scalable data pipeline.

Streaming and Non-Streaming Destinations

The fundamental distinction between Destination types in Cribl is whether they handle data as a continuous stream or in discrete batches.

Streaming Destinations are systems that are always available to receive data in real time. Think of message queues like Kafka or analytics platforms like Splunk. When you send data to a Streaming Destination, Cribl opens a persistent connection and expects the endpoint to accept data immediately.

  • Architectural Use Case: Use for any workflow that requires low-latency data delivery, such as real-time alerting, security monitoring, or live dashboarding.

Non-Streaming (Batch) Destinations are systems that ingest data in chunks or objects, such as cloud object stores (Amazon S3, Azure Blob). Cribl collects events over a period of time, aggregates them into a file, and then writes that file to the destination.

  • Architectural Use Case: Use for archiving data in long-term storage for compliance or future analysis. This is the most cost-effective way to retain high-fidelity data (see the sketch below).
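
To make the contrast concrete, here is a minimal Python sketch of the two delivery models. The endpoint, thresholds, and helper names (stream_events, BatchWriter, upload_object) are hypothetical illustrations of the pattern, not Cribl internals.

```python
import json
import socket
import time

# Streaming model: hold a persistent connection and deliver each event immediately.
# (Hypothetical host/port; real Streaming Destinations use their own protocols.)
def stream_events(events, host="analytics.example.com", port=9997):
    with socket.create_connection((host, port)) as conn:  # connection stays open
        for event in events:
            conn.sendall((json.dumps(event) + "\n").encode())

# Batch model: accumulate events locally, then write one object per batch.
class BatchWriter:
    def __init__(self, max_events=10_000, max_open_secs=300):
        self.buffer = []
        self.opened = time.monotonic()
        self.max_events = max_events
        self.max_open_secs = max_open_secs

    def add(self, event):
        self.buffer.append(event)
        # Flush when either batching condition (size or open time) is met.
        if (len(self.buffer) >= self.max_events
                or time.monotonic() - self.opened >= self.max_open_secs):
            self.flush()

    def flush(self):
        if self.buffer:
            upload_object("\n".join(json.dumps(e) for e in self.buffer))
            self.buffer = []
            self.opened = time.monotonic()

def upload_object(payload: str):
    ...  # placeholder for an object-store upload (Amazon S3, Azure Blob, etc.)
```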

Streaming Destinations

Streaming Destinations are systems that require persistent network connectivity to receive data in real time. Architecting for these Destinations involves ensuring network reliability and minimizing latency between Cribl Nodes and the downstream endpoints.

Architectural Considerations

| Consideration | Description |
| --- | --- |
| Deployment | Architect for reliable, low-latency network paths. |
| Scaling | Scale horizontally by increasing the number of destination endpoints and distributing the load via built-in load balancing. Use multiple Nodes to parallelize output. Note: Cribl.Cloud automatically manages Node scaling. |
| Security | Enforce TLS for all external destinations. Use mutual TLS or API tokens for authentication where supported. Restrict destination ports using firewall rules and configure egress allowlists for outbound traffic. |
| System Reqs | Ensure sufficient CPU and memory for high-throughput serialization and encryption. Enable persistent queues (PQ) for critical destinations and allocate local disk for PQ storage to buffer during outages. Note: Cribl-managed Nodes handle resource sizing and PQ disk management. |
| Other | Monitor PQ depth and output error metrics to detect backpressure from the destination. Tune batch sizes and flush intervals to optimize the balance between throughput and latency. |
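
As one way to act on the monitoring guidance in the table above, the following sketch watches persistent-queue depth for sustained growth, which is the usual signature of a destination applying backpressure. The metric source (read_pq_depth_bytes) is a hypothetical stand-in for whatever metrics pipeline you use.

```python
import time

def read_pq_depth_bytes(destination_id: str) -> int:
    """Hypothetical: fetch the current PQ depth from your metrics system."""
    ...
    return 0

def watch_backpressure(destination_id: str, interval_secs: int = 60,
                       growth_threshold: float = 0.10) -> None:
    """Warn when PQ depth grows steadily between samples (likely backpressure)."""
    previous = read_pq_depth_bytes(destination_id)
    while True:
        time.sleep(interval_secs)
        current = read_pq_depth_bytes(destination_id)
        if previous and (current - previous) / previous > growth_threshold:
            print(f"[warn] {destination_id}: PQ depth rising "
                  f"({previous} -> {current} bytes); check the destination")
        previous = current
```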

Non-Streaming Destinations

Non-Streaming Destinations use a local staging directory on each Node to batch events into files before uploading them. This model is highly scalable and cost-effective for archiving data.

Architectural Considerations

| Consideration | Description |
| --- | --- |
| Deployment | The staging directory must be on a reliable, high-throughput local disk. Ensure network egress bandwidth is sufficient for batch uploads to cloud object stores. Note: Cribl-managed Nodes handle their own staging directory. |
| Scaling | Scale by sharding data across multiple buckets or storage accounts. Use multiple Nodes with separate staging directories to avoid disk I/O contention. Monitor disk I/O and free space to prevent data loss. |
| Security | Use IAM roles or scoped credentials for cloud storage access. Enforce TLS for all uploads. For on-prem NFS, restrict access to trusted hosts and use network segmentation. |
| System Reqs | Provision sufficient disk space for both the staging directory and temporary files, sized based on batch size, event volume, and upload frequency. Optimize disks to reduce batch flush latency. Note: Cribl-managed Nodes handle their own disks. |
| Resilience | These Destinations do not use persistent queues (PQ). Resilience depends entirely on the staging disk capacity. Upon restart, a Node will pace the upload of any leftover files to avoid overwhelming the destination. The staging directory must be persistent across restarts to allow Cribl to resume any incomplete batches. |
| Other | Tune batching conditions (size, open time) for your data profile. |
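
The resilience behavior in the table reduces to a simple restart routine: scan the staging directory for leftover batch files and upload them at a paced rate rather than all at once. The sketch below illustrates the idea; the path, file suffix, and pacing value are assumptions, not Cribl configuration.

```python
import pathlib
import time

STAGING_DIR = pathlib.Path("/opt/cribl/staging")  # illustrative path

def resume_staged_uploads(max_files_per_minute: int = 30) -> None:
    """After a restart, drain leftover batch files without flooding the destination."""
    leftovers = sorted(STAGING_DIR.glob("*.batch"),
                       key=lambda p: p.stat().st_mtime)  # oldest first
    for path in leftovers:
        upload_object(path.read_bytes())       # placeholder object-store upload
        path.unlink()                          # delete only after a successful upload
        time.sleep(60 / max_files_per_minute)  # pace uploads over time

def upload_object(payload: bytes) -> None:
    ...  # placeholder for the actual upload (Amazon S3, Azure Blob, NFS, etc.)
```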

Internal Destinations

Internal Destinations move data from Node to Node within Cribl and serve as the backbone for distributed, hybrid, and tiered architectures. They enable precise routing, low-latency intra-cluster transport, and secure cross-site delivery. Persistent queues (PQ) let you absorb downstream pauses or outages, while multi-endpoint targets provide load distribution and failover. This section focuses on architectural planning across deployment types, resource requirements, and performance tradeoffs.

Cribl-managed Nodes scale resources and handle Node communication, persistent queues, and load balancing automatically.
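
The load-distribution and failover behavior described here can be pictured as a round-robin sender that skips endpoints after a failed delivery. This is a simplified sketch of the pattern (a real implementation would also re-probe unhealthy endpoints); the deliver helper is a hypothetical stand-in for the Cribl HTTP or Cribl TCP transport.

```python
import itertools

class MultiEndpointSender:
    """Round-robin across endpoints; fail over past ones that stop accepting data."""

    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.unhealthy = set()  # a real implementation would re-probe these
        self._cycle = itertools.cycle(self.endpoints)

    def send(self, event) -> None:
        for _ in range(len(self.endpoints)):
            endpoint = next(self._cycle)
            if endpoint in self.unhealthy:
                continue
            try:
                deliver(endpoint, event)  # placeholder transport call
                return
            except ConnectionError:
                self.unhealthy.add(endpoint)  # fail over to the next endpoint
        raise RuntimeError("no healthy endpoints; buffer to PQ or apply backpressure")

def deliver(endpoint: str, event: dict) -> None:
    ...  # placeholder: Cribl HTTP or Cribl TCP delivery to a downstream Node
```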

Data Transport Destinations: Cribl HTTP and Cribl TCP

These Destinations manage the physical transport of data between Cribl Nodes. The choice between them is a primary architectural decision based on network topology and security requirements.

| Aspect | Cribl HTTP Destination | Cribl TCP Destination |
| --- | --- | --- |
| Primary Use Case | Secure, proxy-aware transport across network boundaries (cross-site, hybrid, on-prem to cloud). | High-throughput, low-latency transport within a trusted, low round-trip-time (RTT) network (intra-site, same AZ). |
| Security | Supports TLS and mTLS, enabling strong, certificate-based authentication. Ideal for zero-trust environments and traversing proxies. | Supports TLS and mTLS, enabling strong, certificate-based authentication. |
| Resilience | PQ is highly recommended to absorb WAN latency and transient failures. | PQ is supported and effective for handling brief downstream pauses or processing spikes. |
| Performance | Network proxies can impact throughput. | Higher potential throughput, making it ideal for performance-sensitive hot paths. |
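
Distilled to decision logic, the table above might look like the following hypothetical helper. The RTT threshold mirrors the guidance later on this page; all of it is an assumption to adjust for your environment.

```python
def choose_transport(rtt_ms: float, crosses_untrusted_network: bool,
                     must_traverse_proxy: bool) -> str:
    """Hypothetical decision helper distilled from the table above."""
    if crosses_untrusted_network or must_traverse_proxy:
        return "Cribl HTTP"  # proxy-aware, mTLS-friendly across boundaries
    if rtt_ms < 10:
        return "Cribl TCP"   # lowest overhead on a trusted, low-RTT path
    return "Cribl HTTP"      # over higher-RTT links, favor HTTP with PQ enabled

print(choose_transport(2.5, False, False))  # -> "Cribl TCP"
```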

Routing Logic Destinations: Output Router, Default, and DevNull

These Destinations provide logical control over data flow rather than handling data transport themselves. They delegate their work to other configured components.

| Type | Architectural Function | Key Considerations |
| --- | --- | --- |
| Output Router | Provides conditional fan-out to multiple downstream Destinations based on event content. Enables A/B testing, data sharding, and targeted routing. | Inherits all behaviors (PQ, backpressure, billing) from the selected downstream Destination. Monitor each target independently to prevent hidden hot spots. |
| Default | Acts as a deterministic fallback for events that do not match any explicit route in a Pipeline, preventing silent data loss. | Delegates all behavior to its configured target. Use carefully, as over-reliance can mask routing logic errors. Monitor its usage to detect misconfigurations. |
| DevNull | Explicitly drops events, serving as a terminal sink. Used for load shedding during incidents, testing, or filtering non-critical data. | Immediately removes backpressure but also causes permanent data loss. Govern its use through operational procedures and change controls. |
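
To illustrate the routing semantics side by side, here is a minimal sketch that mimics conditional fan-out with a Default fallback and a DevNull sink. The predicates and destinations are hypothetical; a real Output Router evaluates its configured rules and delegates to actual Destinations.

```python
DEV_NULL = lambda event: None  # terminal sink: drops the event permanently

def make_router(routes, default):
    """routes: list of (predicate, destination) pairs, evaluated in order."""
    def route(event):
        matched = False
        for predicate, destination in routes:
            if predicate(event):
                destination(event)
                matched = True   # fan-out: an event can match several routes
        if not matched:
            default(event)       # Default fallback prevents silent data loss
    return route

# Hypothetical wiring: shard firewall events, shed debug noise, archive the rest.
router = make_router(
    routes=[
        (lambda e: e.get("sourcetype") == "firewall", lambda e: print("-> SIEM", e)),
        (lambda e: e.get("level") == "debug", DEV_NULL),
    ],
    default=lambda e: print("-> archive", e),
)
router({"sourcetype": "firewall", "msg": "deny tcp/445"})
```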

Common Scenarios and Architectural Guidance

The following table outlines architectural patterns for common challenges, combining Destination types with best practices for deployment, scaling, and resilience.

| Scenario | Recommended Components & Configuration | Architectural Rationale & Guidance |
| --- | --- | --- |
| High-Performance Intra-Site Forwarding | Cribl TCP Destination → Cribl TCP Source | Why: This pairing offers the lowest overhead for high-volume flows within a data center or availability zone. Guidance: Ensure Nodes are on a low-RTT network (<10 ms). Verify both ends are in Distributed mode and on compatible versions. Align TLS settings precisely. See Transfer Data Between Workspaces or Environments for more information. |
| Secure Cross-Site & Hybrid Transport | Cribl HTTP Destination → Cribl HTTP Source | Why: HTTP/S with mTLS provides robust security for traversing untrusted networks or proxies. PQ is critical for resilience over the WAN. Guidance: Size Nodes with sufficient CPU for TLS operations. Place PQ on fast local storage and size it to cover the expected duration of a network outage. See Transfer Data Between Workspaces or Environments for more information. |
| Cribl Edge Fleet to Cribl Stream Handoff | Cribl Edge (Cribl HTTP/TCP) → Cribl Stream (Cribl HTTP/TCP) | Why: This standard pattern for tiered processing supports single-ingest billing. Guidance: Use Cribl HTTP for geographically dispersed Cribl Edge Fleets. Use Cribl TCP if Edge Nodes and the Cribl Stream environment share a private, low-latency network. See Cribl Edge to Cribl Stream for more information. |
| Load Distribution & High Availability | Output Router → Multiple Cribl HTTP/TCP Destinations | Why: The Output Router enables sharding by data attributes, while multi-endpoint Destinations handle failover. Guidance: Configure at least two endpoints per failure domain. Use health checks to ensure traffic is sent only to healthy Nodes. See Output Router and Load Balancing for more information. |
| Resilience & Failure Domain Isolation | All Destinations with persistent queues (PQ) | Why: PQ decouples upstream processes from downstream failures and buffers data locally during an outage. Guidance: Use site or region boundaries as failure domains. Deploy PQ on any Destination that crosses these boundaries to prevent a localized failure from causing a cascading, system-wide outage. See Optimize Destination Persistent Queues for more information. |
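
The PQ sizing guidance above is a straightforward calculation: sustained throughput times the outage you want to ride out, with headroom. The sketch below shows one way to estimate it; every number, including the compression ratio, is an assumption to replace with your own measurements.

```python
def pq_disk_bytes(throughput_mb_per_s: float, outage_minutes: float,
                  compression_ratio: float = 1.0, headroom: float = 1.5) -> float:
    """Estimate the PQ disk needed to buffer a full outage, with headroom."""
    raw = throughput_mb_per_s * 1024**2 * outage_minutes * 60
    return raw * compression_ratio * headroom

# Example: 20 MB/s sustained, a 30-minute outage budget, ~4:1 queue compression.
print(f"{pq_disk_bytes(20, 30, compression_ratio=0.25) / 1024**3:.1f} GiB")
```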

Learn more about Destinations