feat: add BGP/BMP collector #700
Imported from GitHub.
Original GitHub issue: #2183
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/2183
Original created: 2025-12-18T06:35:21Z
Is your feature request related to a problem?
We need to integrate a BMP (BGP Monitoring Protocol) collector into ServiceRadar that can sit on the edge or in-cluster, receive BMP data, and write it to NATS JetStream. The interface to the message broker should be abstracted so we can support additional or different message brokers later; we are currently targeting NATS JetStream, but may consider something like igy.rs or some kind of hybrid architecture in the future.
Once data is written to the message broker, we have several options for what to do next. Using the stateless rule-based zen-engine (serviceradar-zen), we can do very fast ETL to get it into the correct shape for an OCSF-based schema and write it to a different message subject; the db-event-writer consumer would then process it off the queue and write it to the DB.
Describe the solution you'd like
We have identified https://github.com/nxthdr/risotto as a prime candidate; it is MIT licensed, and the upstream maintainers have been contacted in the past and are open to PRs. We would likely need to create a new trait for the messaging interface, similar to https://github.com/carverauto/serviceradar/blob/staging/sr-architecture-and-design/prd/14-netflow-collector-rust.md
I think it makes more sense to do the ETL in our pipeline rather than in the Rust BMP collector itself, so the collector stays generic (or keeps whatever format the data is already in) and we can keep using the upstream version.
https://schema.ocsf.io/1.7.0/classes/network_activity
PRD:
https://github.com/carverauto/serviceradar/blob/staging/sr-architecture-and-design/prd/15-bmp-bgp-collector-rust.md
Future work would be around analysis/data processing and could involve @marvin-hansen and his causal computation library (https://github.com/deepcausality-rs)
Describe alternatives you've considered
Additional context
Related to https://github.com/carverauto/serviceradar/issues/859
Tangentially related to https://github.com/carverauto/serviceradar/issues/2181
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/2183#issuecomment-3906229640
Original created: 2026-02-16T03:20:14Z
This PRD outlines the integration of BGP/BMP (BGP Monitoring Protocol) into the ServiceRadar ecosystem. By leveraging the Rust `risotto` crate for ingestion and an Elixir Broadway consumer for the data pipeline, we bypass the complexity of Zen rules while gaining deep visibility into internal routing decisions (Calico, OSPF) and external ISP health.
PRD: ServiceRadar BGP/BMP "Routing Intelligence" Engine
1. Vision & Purpose
To transform ServiceRadar from a status-monitor into a Decision-Monitor. By capturing the real-time routing conversations between the UDM Pro Max (FRR), K8s clusters (Calico), and internal OSPF routers (`farm01`/`tonka01`), we provide an absolute visual "Source of Truth" for how data travels through the infrastructure.
2. Technical Architecture: The "Decision Pipeline"
2.1 The Ingestor (Rust + Risotto)
- Uses the `risotto` crate.
- Listens on port 11019 for BMP streams from the UDM Pro Max.
- Publishes to the `events.bgp.raw` subject.
2.2 The Pipeline (Elixir + Broadway)
- Consumes from the `events.bgp.raw` subject.
3. Cyber-Physical Routing Topology
3.1 The "K8s-to-Core" Visibility (Calico)
3.2 OSPF Redistribution Visibility
- OSPF routes are fed via `redistribute ospf` into the BGP process monitored by BMP.
- This covers the internal routers `farm01` and `tonka01`.
- If a route between `farm01` and `tonka01` fails, the BGP BMP stream will report the route withdrawal.
4. Integration with the "Four Pillars"
Pillar 1: Apache Arrow
- Routing state is stored with the schema `[Source_Node_ID, Destination_Node_ID, Prefix, Metric, Status]`.
Pillar 2: Wasm-Arrow Bridge
- When selecting a node such as `farm01`, Wasm calculates all active BGP/OSPF routes originating from that router in <1 ms.
Pillar 3: Deep Causality (The Brain)
- If `risotto` reports an ISP PeerDown and SNMP reports `eth0` is UP, the engine flags a Provider Routing Failure (Logic) rather than a Cable Failure (Physical).
Pillar 4: Animated Particle Shaders
5. Critical Use Cases
5.1 Calico Pod Egress Verification
- `risotto` sees a Calico route withdrawal for Pod CIDR `10.42.50.0/24`.
5.2 Internal Router Path Shift (Farm -> Tonka)
- The routing path shifts from `farm01` to `tonka01`.
6. Success Metrics
- Matches the router's `show ip bgp` table with 100% parity.
7. Implementation Steps for Today
1. Configure `router ospf` to redistribute into `router bgp`.
2. Deploy the `risotto` collector as a sidecar/container.
3. Create the `bgp_routing_state` hypertable in CNPG.
4. Wire the `deck.gl` ArcLayer to subscribe to the BGP Arrow buffer.
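For step 1, FRR on the UDM could look roughly like the fragment below. This is a hedged sketch: the ASN, router-id, collector address, and target name are placeholders, FRR's BMP module must be loaded (`bgpd -M bmp`), and the exact `bmp` command syntax varies by FRR version, so check the FRR BGP documentation before applying.

```
router bgp 65000
 bgp router-id 192.0.2.1
 ! Feed OSPF-learned routes (farm01/tonka01) into the BGP table BMP watches
 address-family ipv4 unicast
  redistribute ospf
 exit-address-family
 ! Stream the monitored table to the risotto collector over BMP
 bmp targets serviceradar
  bmp connect 192.0.2.10 port 11019 min-retry 1000 max-retry 5000
  bmp monitor ipv4 unicast post-policy
 exit
```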