Network Ingress Design Patterns

Cribl Validated Architectures (CVA) treat ingress abstraction as a core design requirement. These guardrails keep upstream senders decoupled from data-plane topology, enabling horizontal scaling and fault tolerance. Standardize how traffic enters Worker Groups, whether on-prem, in Cribl.Cloud, or across hybrid boundaries, so that maintenance and Worker Node failures never require updates to upstream senders.

On-Prem Ingress: Scaling & TCP Pinning

For customer-managed Worker Groups, ingress design assumes Worker Nodes are interchangeable. Push-based Sources (such as Syslog, HEC) must never target individual Worker Node IPs.

  • Abstraction layer: Configure all Sources to send data to a Virtual IP (VIP) or external load balancer (LB). This allows you to add or remove Worker Nodes for maintenance or scaling without reconfiguring senders. For details, see Push Sources.
  • Infrastructure: Deploy a Network or Application LB (such as F5, HAProxy, NGINX, AWS NLB/ALB) in front of each Worker Group. Configure health checks against listening ports on Worker Nodes so that traffic routes only to healthy nodes; see the probe sketch after this list.
  • Preventing TCP pinning: To maximize utilization, disable session stickiness/persistence for Syslog and other long-lived TCP traffic. Use non-sticky algorithms such as round-robin, least-connections, or source-IP hash without persistence. For details, see Mitigate TCP Pinning.
  • Two-tier balancing: Rely on the LB to distribute flows across Worker Nodes, then enable Cribl-native TCP load balancing (where available) to fan events out across the individual Worker processes on each Worker Node.
  • UDP handling: For high-volume Syslog, keep UDP hops short and local. Use a local collection tier (Cribl Edge or a small Worker Group) behind a VIP, then forward to downstream Worker Groups over TCP/HTTP; see the relay sketch after this list. For details, see Syslog Best Practices.
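
The following is a minimal sketch of the health-check semantics described under Infrastructure above: an external LB should route to a Worker Node only if the Source's listening port accepts connections. The node addresses and port are hypothetical placeholders; in production the LB itself performs this check.

```python
#!/usr/bin/env python3
"""Minimal TCP health-check probe. A node is routable only if its
Source's listening port accepts connections; the LB's own health
checks implement the same test."""

import socket

# Hypothetical Worker Node addresses and a hypothetical Syslog listener port.
WORKER_NODES = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
LISTEN_PORT = 9514
TIMEOUT_SECONDS = 2

def is_healthy(host: str, port: int) -> bool:
    """Return True if the node accepts a TCP connection on the Source port."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for node in WORKER_NODES:
        state = "healthy" if is_healthy(node, LISTEN_PORT) else "unhealthy"
        print(f"{node}:{LISTEN_PORT} -> {state}")
```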
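
And a conceptual sketch of the UDP handling pattern: receive UDP Syslog locally and forward it over a long-lived TCP connection to a downstream Worker Group VIP. In a CVA deployment this role is filled by Cribl Edge or a small local Worker Group, not a custom script; the bind address, VIP hostname, and ports below are hypothetical.

```python
#!/usr/bin/env python3
"""Conceptual "keep UDP local" relay: ingest UDP syslog on the local
segment, forward each message over TCP to a downstream Worker Group VIP."""

import socket

LOCAL_UDP_BIND = ("0.0.0.0", 5514)                    # hypothetical local UDP listener
DOWNSTREAM_VIP = ("workers.example.internal", 9514)   # hypothetical Worker Group VIP

def relay() -> None:
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(LOCAL_UDP_BIND)
    # One long-lived TCP connection to the VIP; the LB in front of the
    # Worker Group distributes these flows across healthy nodes.
    tcp = socket.create_connection(DOWNSTREAM_VIP, timeout=5)
    try:
        while True:
            data, _addr = udp.recvfrom(65535)
            # Newline-frame each datagram so the downstream TCP Source can
            # split events (octet-counted framing is the common alternative).
            tcp.sendall(data.rstrip(b"\n") + b"\n")
    finally:
        udp.close()
        tcp.close()

if __name__ == "__main__":
    relay()
```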

Cribl.Cloud Ingress

In Cribl.Cloud deployments, ingress is exposed as a managed Fully Qualified Domain Name (FQDN) and IP set, serving as the VIP for the Worker Group.

  • Managed connectivity: Sources send directly to the Cribl.Cloud ingress FQDN. No customer-managed LB is required in front of Cribl.Cloud Worker Nodes. For details, see Understanding Cribl.Cloud Architectural Model.
  • Traffic distribution: The Cribl.Cloud data plane uses native load balancing and consistent hashing to prevent TCP pinning.
  • Egress requirements: Configure local firewalls to permit outbound traffic to the documented Cribl.Cloud hostnames/IPs; the connectivity-check sketch after this list shows one way to verify this. For details, see Required Ports in Cribl.Cloud. If you use corporate egress proxies, configure them without stickiness so they don't re-introduce TCP pinning upstream of the Cribl-managed layer.
  • High-volume scaling: For multi-tenant or extreme-volume cases, scale out by deploying multiple Cribl.Cloud Worker Groups and distribute traffic via DNS-level routing, upstream LBs, or Source-side routing logic (see the routing sketch after this list). For details, see Resource Sizing.
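
A sketch of the connectivity check referenced under Egress requirements: attempt a TLS handshake against the Cribl.Cloud ingress FQDN on each required port to confirm that local firewalls and any egress proxy permit the outbound traffic. The FQDN and port list below are placeholders; take the real hostnames and ports from Required Ports in Cribl.Cloud.

```python
#!/usr/bin/env python3
"""Outbound-connectivity check toward Cribl.Cloud ingress. A successful
TLS handshake indicates the local firewall and any egress proxy allow
the required outbound traffic."""

import socket
import ssl

INGRESS_FQDN = "default.main.example.cribl.cloud"  # placeholder FQDN
PORTS_TO_CHECK = [443, 4200, 9997]                 # placeholder ports; see Required Ports in Cribl.Cloud

def check_tls(host: str, port: int) -> bool:
    """Attempt a TCP connect plus TLS handshake with SNI set to the FQDN."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return tls.version() is not None
    except (OSError, ssl.SSLError):
        return False

if __name__ == "__main__":
    for port in PORTS_TO_CHECK:
        ok = check_tls(INGRESS_FQDN, port)
        print(f"{INGRESS_FQDN}:{port} -> {'reachable' if ok else 'blocked'}")
```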
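
A sketch of Source-side routing across multiple Worker Groups, as referenced under High-volume scaling: map each tenant (or sender) deterministically to one of several ingress FQDNs so load spreads without per-sender reconfiguration. The group FQDNs are placeholders; DNS-level weighting or an upstream LB can achieve the same result outside the sender.

```python
"""Deterministic tenant-to-Worker-Group mapping for Source-side routing."""

import hashlib

# Placeholder ingress FQDNs for independently scaled Worker Groups.
WORKER_GROUP_FQDNS = [
    "group-a.main.example.cribl.cloud",
    "group-b.main.example.cribl.cloud",
    "group-c.main.example.cribl.cloud",
]

def ingress_for(tenant_id: str) -> str:
    """Hash the tenant ID to pick a stable Worker Group ingress FQDN."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(WORKER_GROUP_FQDNS)
    return WORKER_GROUP_FQDNS[index]

if __name__ == "__main__":
    for tenant in ("acme", "globex", "initech"):
        print(tenant, "->", ingress_for(tenant))
```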

Hybrid Ingress Patterns

For hybrid deployments, treat each environment boundary as a distinct ingress domain with specific constraints:

  • On-prem segment: Align with on-prem standards by using non-sticky LBs in front of Worker Groups; pair this with Cribl-native TCP/HTTP load balancing (where available) to distribute events evenly across all Worker processes. For details, see Data Transport & Routing.
  • Cloud transition: Configure on-prem Worker Nodes to act as an aggregation tier that collects local traffic and forwards it to the Cribl.Cloud ingress over TLS. This transition relies on FQDN-based allow-lists rather than static IPs, to accommodate the dynamic nature of cloud environments. All communication must follow hybrid connectivity and port guidance so that the on-prem Worker Nodes can authenticate to and reach the Cribl.Cloud Leader and data plane. For details, see Hybrid Deployment Architecture Planning.
  • Cribl.Cloud egress: Once processed, data is delivered from Cribl.Cloud Worker Nodes to final Destinations (such as SaaS platforms, object storage, SIEMs) using each Destination's preferred protocol. Because these Worker Nodes reside in a managed cloud environment, you must apply provider-specific best practices for reliable delivery: configure Destination allow-lists for the Cribl.Cloud egress IP addresses, enforce TLS for data in transit, and tune endpoint configurations to handle high-concurrency cloud traffic. For details, see Communication and Data Security.
  • Operational integrity: To maintain maximum throughput and visibility, avoid “daisy-chaining” LBs (such as LB → Proxy → LB → Worker Node). Complex network chains often introduce “silent failures” where a middle hop remains active while the downstream Worker Node is down. If corporate policy requires an intermediate security gateway or egress proxy, you must validate that it acts as a transparent pass-through:
    • No session stickiness: Configure proxies to allow long-lived TCP flows (like Syslog) to redistribute naturally. This prevents “pinning” traffic to a single path, which would bypass your LB strategy and create overloaded “hot” Nodes.
    • Preserve TLS integrity: The proxy must support SNI (Server Name Indication) and pass the ingress FQDN through without modification. If the proxy performs “man-in-the-middle” inspection or incorrectly terminates TLS, the Worker Node might fail to identify the Source and reject the connection as unauthenticated. The SNI verification sketch after this list shows one way to check which certificate is actually presented on the path.
    • Health and timeout alignment: Align the proxy's timeout settings with the CVA Operational Guardrails requirement for stable listeners and consistent backpressure. If the proxy closes “idle” connections before Cribl does, it forces constant, costly TCP reconnections that degrade performance; the keepalive sketch after this list illustrates sender-side tuning.
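
The SNI verification sketch referenced above: connect with SNI set to the ingress FQDN and inspect the certificate actually presented. If the issuer belongs to a corporate inspection proxy, or verification fails outright, TLS is being re-terminated in the path. The FQDN is a placeholder.

```python
"""Check whether an intermediate proxy is a transparent TLS pass-through."""

import socket
import ssl

INGRESS_FQDN = "default.main.example.cribl.cloud"  # placeholder FQDN
PORT = 443

def presented_certificate(host: str, port: int) -> dict:
    """Handshake with SNI set to the ingress FQDN and return the peer certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        # server_hostname sets SNI; a transparent proxy must pass it through.
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    try:
        cert = presented_certificate(INGRESS_FQDN, PORT)
        print("subject:", cert.get("subject"))
        print("issuer:", cert.get("issuer"))  # an inspection proxy shows up here
    except ssl.SSLCertVerificationError as err:
        # An untrusted certificate on the path is another sign of TLS re-termination.
        print("certificate verification failed:", err)
```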
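
And the keepalive sketch referenced under Health and timeout alignment: enable sender-side TCP keepalives on long-lived flows so an intermediate proxy's idle timeout doesn't silently tear down connections that Cribl still considers open. The endpoint and interval values are illustrative; choose probe intervals shorter than the proxy's idle timeout.

```python
"""Sender-side TCP keepalive tuning for long-lived flows through a proxy."""

import socket

DOWNSTREAM = ("workers.example.internal", 9514)  # hypothetical VIP or ingress endpoint

def open_long_lived_connection(addr):
    sock = socket.create_connection(addr, timeout=10)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs; guarded because they are not available on every platform.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probes start
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 20)  # seconds between probes
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before the OS gives up
    return sock

if __name__ == "__main__":
    conn = open_long_lived_connection(DOWNSTREAM)
    print("connected with keepalive enabled:", conn.getpeername())
    conn.close()
```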