Hub-and-Spoke with Core Worker Group Overlay
In this overlay, multiple spoke Worker Groups handle raw ingest and local filtering, while a single, central Core Worker Group receives that data for global processing.
This overlay is defined by clear roles for each component:
Spoke Worker Groups: These are typically region-, function-, or platform-specific (for example, wg-us-ingest, wg-eu-ingest). Their role is to handle raw ingest, perform initial filtering, and execute basic routing logic close to the data source.
Central Core Worker Group: This single, centralized group acts as the primary processing hub. It receives data from spoke Worker Groups via Worker Group to Worker Group bridging and performs the most intensive tasks, including normalization, enrichment, and governance. It also routes and sends (bifurcates) the processed data to many final Destinations, such as multiple SIEMs, Cribl Lake, and observability tools.
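To make the division of labor concrete, here is a minimal conceptual sketch of the data flow, written in TypeScript rather than Cribl configuration. All names in it (LogEvent, spokeProcess, coreProcess, the ingest_region and normalized fields, and the destination callbacks) are illustrative assumptions, not part of any product API.

```typescript
// Conceptual model of the hub-and-spoke flow. This is NOT Cribl configuration;
// every name here is illustrative.
type LogEvent = {
  raw: string;
  fields: Record<string, string>;
};

// Spoke Worker Group: ingest close to the source, apply basic local filtering,
// and tag events before forwarding them to the core.
function spokeProcess(events: LogEvent[], region: string): LogEvent[] {
  return events
    .filter((e) => e.raw.trim().length > 0) // drop empty/noise events locally
    .map((e) => ({
      ...e,
      fields: { ...e.fields, ingest_region: region },
    }));
}

// Core Worker Group: normalize and enrich once, then bifurcate the same
// stream to every final Destination (SIEMs, a data lake, observability tools).
function coreProcess(
  events: LogEvent[],
  destinations: Array<(e: LogEvent) => void>
): void {
  for (const e of events) {
    const normalized: LogEvent = {
      ...e,
      fields: { ...e.fields, normalized: "true" }, // stand-in for schema/governance work
    };
    for (const send of destinations) {
      send(normalized); // one copy per Destination
    }
  }
}

// Example wiring: two spokes feeding one core, which fans out to two Destinations.
const spokeOutput = [
  ...spokeProcess([{ raw: "login ok", fields: {} }], "us"),
  ...spokeProcess([{ raw: "login failed", fields: {} }], "eu"),
];
coreProcess(spokeOutput, [
  (e) => console.log("to SIEM:", e.fields),
  (e) => console.log("to data lake:", e.fields),
]);
```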
Benefits
Strong central data governance: Provides a single place to define and enforce schemas, masking rules, and routing standards across the entire deployment.
Destination agility: Introducing new Destinations (for example, new SIEMs or data lakes) is simpler because the Core Worker Group handles all final routing logic, requiring minimal or no changes to the spoke Worker Groups.
Clear separation of concerns: Spoke Worker Groups optimize for local ingest and basic controls, while the Core Worker Group optimizes for global consistency and routing.
Risks / Trade-offs
Critical path and bottleneck: The Core Worker Group is a critical path and potential throughput bottleneck, and an outage there has a large blast radius.
Latency impact: The additional Worker Group to Worker Group hop introduces extra latency. For use cases that require very low latency, this added time can be a significant trade-off.
Change management risk: Change management at the Core Worker Group requires careful governance to avoid unintentionally breaking multiple Destinations at once.
Design Notes (Mitigations)
Critical path mitigation: Scale the Core Worker Group horizontally and size it with additional headroom relative to the spoke Worker Groups (a rough sizing sketch follows this list). Treat it as a top-tier, highly available shared service with strict SLOs (strong HA configuration, persistent queues, and robust load balancing).
Bypass core flows: Avoid routing all data through the Core Worker Group by default. If data streams do not require centralized governance (for example, local data sent to a regional Destination), allow direct routing from the spoke Worker Group to the Destination.
Structured routing and interoperability: Implement structured routing practices and use interoperable Packs and schemas. One example is tagging traffic destined for the Core Worker Group with core_routed=true (see the routing sketch after this list). This lets the Core Worker Group evolve independently while maintaining stability.
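As a rough illustration of the headroom guidance above, the following back-of-the-envelope calculation sizes the Core Worker Group against aggregate spoke throughput. All figures (per-spoke peaks, the 30% headroom factor, per-worker throughput) are hypothetical assumptions for the sketch; substitute measured values from your own environment.

```typescript
// Back-of-the-envelope Core Worker Group sizing. Every number here is a
// hypothetical assumption.
const spokePeakMBps = [400, 250, 150];   // assumed peak throughput of each spoke Worker Group
const headroomFactor = 1.3;              // assumed 30% headroom for bursts and failover
const perWorkerMBps = 50;                // assumed sustained throughput per core worker process

const aggregatePeak = spokePeakMBps.reduce((sum, mbps) => sum + mbps, 0);           // 800 MB/s
const workersNeeded = Math.ceil((aggregatePeak * headroomFactor) / perWorkerMBps);  // 21

console.log(`Size the Core Worker Group for ~${workersNeeded} worker processes`);
```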
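To illustrate the bypass and tagging notes together, the sketch below shows a spoke-side routing decision that sends only governance-relevant traffic to the core (tagged core_routed=true) and lets purely local data go straight to a regional Destination. This is a conceptual sketch, not an actual Cribl route filter; the needsCentralGovernance predicate, the sourcetype values, and the route names are illustrative assumptions.

```typescript
// Spoke-side routing decision: bypass the core for purely local data, and tag
// everything that needs central governance with core_routed=true.
// The predicate and route names are illustrative.
type SpokeEvent = { raw: string; fields: Record<string, string> };

function needsCentralGovernance(e: SpokeEvent): boolean {
  // Hypothetical rule: anything security-relevant goes through the core.
  return e.fields.sourcetype === "auth" || e.fields.sourcetype === "firewall";
}

function routeFromSpoke(e: SpokeEvent): { route: "core" | "regional"; event: SpokeEvent } {
  if (needsCentralGovernance(e)) {
    return {
      route: "core", // forwarded via Worker Group to Worker Group bridging
      event: { ...e, fields: { ...e.fields, core_routed: "true" } },
    };
  }
  return { route: "regional", event: e }; // direct to the regional Destination
}

// Example: one event bypasses the core, one is tagged and sent to it.
console.log(routeFromSpoke({ raw: "app debug line", fields: { sourcetype: "app" } }));
console.log(routeFromSpoke({ raw: "failed login", fields: { sourcetype: "auth" } }));
```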