Architecture Refactor: Move device state to registry, treat Proton as OLAP warehouse #639
Imported from GitHub.
Original GitHub issue: #1924
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924
Original created: 2025-11-05T03:32:25Z
Problem Statement
ServiceRadar currently treats Proton (a stream processing database) as the primary source of truth for device state, causing performance issues that don't scale beyond tens of thousands of devices. While the tactical CTE query fix (#1921) reduced Proton CPU from 3986m to ~1000m, we're still fundamentally doing the wrong thing: hitting Proton for every device lookup, stats query, and inventory search.
Current issues:
- `count()` queries on 50k devices

Vision
Establish a proper layered data architecture:
Proton should only answer analytical, time-windowed questions (aggregations and history). Proton should not answer per-request questions like current device state, dashboard stats, or inventory search.
Detailed Plan
See `newarch_plan.md` for comprehensive implementation details.
Implementation Phases
Phase 1: Device Registry Service (Week 1-2)
Goal: Canonical in-memory device graph
Success: Registry hydrates from Proton, stays in sync with new updates
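A later comment notes that Phase 1 landed a `DeviceRecord` type and an in-memory store with ID/IP/MAC indexes. As a rough sketch of that shape (all names and fields here are illustrative, not the actual `pkg/registry` code):

```go
package main

import (
	"fmt"
	"sync"
)

// DeviceRecord is a hypothetical canonical device row held in memory.
type DeviceRecord struct {
	ID  string
	IP  string
	MAC string
}

// DeviceStore keeps secondary indexes so lookups never touch Proton.
type DeviceStore struct {
	mu    sync.RWMutex
	byID  map[string]*DeviceRecord
	byIP  map[string]*DeviceRecord
	byMAC map[string]*DeviceRecord
}

func NewDeviceStore() *DeviceStore {
	return &DeviceStore{
		byID:  make(map[string]*DeviceRecord),
		byIP:  make(map[string]*DeviceRecord),
		byMAC: make(map[string]*DeviceRecord),
	}
}

// Upsert inserts or replaces a record and refreshes all indexes.
func (s *DeviceStore) Upsert(r DeviceRecord) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if old, ok := s.byID[r.ID]; ok { // drop stale index entries first
		delete(s.byIP, old.IP)
		delete(s.byMAC, old.MAC)
	}
	rec := &r
	s.byID[r.ID] = rec
	s.byIP[r.IP] = rec
	s.byMAC[r.MAC] = rec
}

// GetByIP answers "which device has this IP right now?" from memory.
func (s *DeviceStore) GetByIP(ip string) (DeviceRecord, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	rec, ok := s.byIP[ip]
	if !ok {
		return DeviceRecord{}, false
	}
	return *rec, true
}

func main() {
	store := NewDeviceStore()
	store.Upsert(DeviceRecord{ID: "dev-1", IP: "10.0.0.5", MAC: "aa:bb:cc:dd:ee:ff"})
	if rec, ok := store.GetByIP("10.0.0.5"); ok {
		fmt.Println(rec.ID)
	}
}
```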
Phase 2: First-Class Collector Capabilities (Week 3-4)
Goal: Stop deriving capability from metadata
Success: Collector status from explicit records, not metadata inference
Phase 3: Stats Aggregator (Week 5)
Goal: Pre-aggregate dashboard metrics
Success: Dashboard loads in <10ms, no Proton queries for stats
Phase 4: Search Index (Week 6-7)
Goal: Fast inventory search without table scans
Success: Inventory search returns in <50ms for any query
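Later comments mention a trigram index (`pkg/registry/trigram_index.go`). A toy version of substring search via trigram intersection, to illustrate the approach rather than reproduce the real implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// TrigramIndex maps 3-character substrings to device IDs, giving
// substring search without scanning every record.
type TrigramIndex struct {
	grams map[string]map[string]struct{}
}

func NewTrigramIndex() *TrigramIndex {
	return &TrigramIndex{grams: make(map[string]map[string]struct{})}
}

func trigrams(s string) []string {
	s = strings.ToLower(s)
	var out []string
	for i := 0; i+3 <= len(s); i++ {
		out = append(out, s[i:i+3])
	}
	return out
}

// Add indexes a searchable field (hostname, IP, etc.) for a device.
func (t *TrigramIndex) Add(deviceID, field string) {
	for _, g := range trigrams(field) {
		if t.grams[g] == nil {
			t.grams[g] = make(map[string]struct{})
		}
		t.grams[g][deviceID] = struct{}{}
	}
}

// Search keeps only devices that match every trigram of the query.
func (t *TrigramIndex) Search(q string) []string {
	gs := trigrams(q)
	if len(gs) == 0 {
		return nil
	}
	counts := make(map[string]int)
	for _, g := range gs {
		for id := range t.grams[g] {
			counts[id]++
		}
	}
	var out []string
	for id, n := range counts {
		if n == len(gs) { // must match all trigrams
			out = append(out, id)
		}
	}
	return out
}

func main() {
	idx := NewTrigramIndex()
	idx.Add("dev-1", "core-router-nyc")
	idx.Add("dev-2", "edge-switch-sfo")
	fmt.Println(idx.Search("router"))
}
```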
Phase 5: Capability Matrix (Week 8-9)
Goal: Model Device ⇄ Service ⇄ Capability explicitly
Success: Can answer "when did device X last have successful ICMP?" without manual queries
Phase 6: Proton Boundary Enforcement (Week 10)
Goal: Ensure all state queries hit registry, not Proton
Success: Proton CPU <200m under normal load
Success Metrics
Performance Targets
Data Quality
Developer Experience
Rollback Plan
Each phase is independently deployable with feature flags:
```go
const (
	UseRegistry        = true // Phase 1
	UseCapabilityIndex = true // Phase 2
	UseStatsCache      = true // Phase 3
	UseSearchIndex     = true // Phase 4
)
```
If any phase has issues, disable the flag and fall back to Proton queries (slower but functional).
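A sketch of how such a flag-guarded read path could degrade to Proton (the lookup functions below are stand-ins, not real ServiceRadar APIs):

```go
package main

import "fmt"

// Feature flag mirroring the phase rollout above (illustrative).
var UseRegistry = true

type Device struct{ ID, IP string }

// registryLookup and protonLookup stand in for the real read paths;
// the stubs always succeed so the control flow stays visible.
func registryLookup(ip string) (Device, error) {
	return Device{ID: "dev-1", IP: ip}, nil
}

func protonLookup(ip string) (Device, error) {
	// Slower CTE-based query path, kept as the safety net.
	return Device{ID: "dev-1", IP: ip}, nil
}

// GetDeviceByIP prefers the registry but falls back to Proton when the
// flag is off or the registry cannot serve the request.
func GetDeviceByIP(ip string) (Device, error) {
	if UseRegistry {
		if d, err := registryLookup(ip); err == nil {
			return d, nil
		}
	}
	return protonLookup(ip)
}

func main() {
	d, err := GetDeviceByIP("10.0.0.5")
	if err == nil {
		fmt.Println(d.ID)
	}
}
```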
Related Issues
Open Questions
References
- `85733a09` - Tactical CTE query fix
- `65e5d947` - Architecture plan documentation

Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3489467643
Original created: 2025-11-05T05:41:42Z
Phase 1 progress recap
Phase 1 progress recap

- `DeviceRecord` and in-memory store (`pkg/registry/device.go`, `device_store.go`) with ID/IP/MAC indexes and cache snapshot helpers.
- Registry hydration from Proton (`pkg/registry/hydrate.go`, `pkg/core/server.go`).
- `ProcessBatchDeviceUpdates` keeps the hot cache in sync on every update/tombstone and exposes cache-backed getters used by API/device manager code paths.
- API wiring (`pkg/core/api/server.go`).
- Trigram search index (`pkg/registry/trigram_index.go`, `pkg/registry/registry.go`).

What's left (next engineer hand-off)

- `web/src/components/Devices/DeviceList` (and any inventory/search routes) should call the new registry search endpoint instead of fan-out SRQL queries. Surfacing `metrics_summary`, `alias_history`, and the collector capability blobs that the API now attaches will require mapping the new fields in the React data loader and updating `DeviceRow` renderers.
- Rank results (by `score`), display an inline badge for exact hostname/IP hits, and preserve the existing status filters.
- Update `web/src/lib/api.ts` to use the registry-backed `/api/devices` list/search endpoints.
- Add telemetry (`pkg/core/api/server.go`) capturing query length, match count, and latency so we can validate the <50 ms target under load.
- Move callers of `db.GetUnifiedDevices...` (e.g., identity lookup, mapper publisher) over to `DeviceRegistry.SearchDevices`/`GetDeviceRecord` to avoid Proton reads.
- Document the wiring in `pkg/core/server.go` + `pkg/core/api/server.go` and note it in `docs/docs/agents.md`.

Once those are in place, we can iterate on Phase 2 (capability index) with a warmed-up UI and telemetry to prove the search latency/success metrics.
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3489640042
Original created: 2025-11-05T06:52:40Z
Update after Phase 1/2 rollout:

- Proton returned to ~4 cores consumed within 10 minutes of redeploy (`NAME CPU(cores) MEMORY(bytes)` / `serviceradar-proton-654fbcbcbf-bqdxs 3993m 3018Mi`).
- The dominant query, issued once per minute by the SRQL/Proton OCaml client, scans ~15.6M rows / 5.9GB per run, so the Observability dashboard still hammers Proton.
- The CTE-based device lookups introduced in Phase 1 are still invoked hundreds of times per half hour (totalling ~3.7e8 rows read). They're better than the old pattern but remain an expensive fallback because SRQL routes keep hitting Proton instead of the registry cache.
- There are still Code 210 exceptions for giant clauses generated from SRQL filters (e.g. 100+ IPs or Armis IDs), which cause retries and more table scans.

To address the remaining load we extended the plan with Phase 3b (Critical Log Rollups) so the web dashboards consume a dedicated log digest instead of the raw scan, and we tightened Sprint 6 tasks to force SRQL/device lookups through the registry and search index. That should eliminate the hot queries once Phases 3-6 are complete.
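One standard mitigation for oversized `IN (...)` clauses like these, sketched here for illustration (not necessarily the fix ServiceRadar adopted), is to chunk the filter values and issue several smaller queries:

```go
package main

import (
	"fmt"
	"strings"
)

// chunk splits a long value list so no single generated IN (...) clause
// exceeds the backend's limits; each chunk becomes its own query.
func chunk(vals []string, size int) [][]string {
	var out [][]string
	for len(vals) > 0 {
		n := size
		if len(vals) < n {
			n = len(vals)
		}
		out = append(out, vals[:n])
		vals = vals[n:]
	}
	return out
}

func main() {
	ips := make([]string, 0, 250)
	for i := 0; i < 250; i++ {
		ips = append(ips, fmt.Sprintf("10.0.0.%d", i))
	}
	for _, batch := range chunk(ips, 100) {
		// Illustrative query text only; the real system builds SRQL.
		q := "SELECT device_id FROM unified_devices WHERE ip IN (" +
			strings.Join(batch, ", ") + ")"
		_ = q
	}
	fmt.Println(len(chunk(ips, 100)), "batches")
}
```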
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3489724548
Original created: 2025-11-05T07:26:04Z
Status update from the architecture refactor work:
- Log digest cache landed. Added `pkg/core/log_digest.go` with a capped ring buffer + 1h/24h counters, hydrated every 30s from Proton via the new `DBLogDigestSource` helper. Core start-up now wires the aggregator and keeps it refreshed until shutdown.
- New critical log APIs. Exposed `/api/logs/critical` and `/api/logs/critical/counters` (protected routes); the handlers serve the in-memory digest so fatal/error widgets no longer hit SRQL.
- Frontend wired to cache. `web/src/services/dataService.ts` fetches the new endpoints and supplies `CriticalLogsWidget` with typed data + counters; accompanying unit coverage mocks the API responses.
- Plan/doc cleanup. Phase 3b items are checked off in `newarch_plan.md` to reflect the cache + API + UI work.
- Validation. `go test ./pkg/core/...` and `npm run lint` are green.

Remaining for Phase 3b: stream-driven hydration (instead of snapshots) and feature-flag plumbing once we're ready to roll this out broadly.
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3492120109
Original created: 2025-11-05T16:17:11Z
Phase 3b status update:

- Landed the log digest aggregator, tailer, and persistence plumbing; the feature flag is now available in config.
- Built/published `ghcr.io/carverauto/serviceradar-core@sha256:4124f3f298f13c1d2425725bbca80c8bc2e902a93074e2e3849a24103b6e1be9` and rolled the demo deployment to that image.
- During the rollout, enabling the flag in the demo cluster prevented the HTTP listener from ever becoming ready (readiness probe stayed red). For now the flag is set to false in the runtime config so the new build can serve traffic.
- Follow-up: debug why enabling the log digest stream blocks readiness before we flip the flag back on.
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3492120711
Original created: 2025-11-05T16:17:20Z
Phase 3b status update:
- The feature flag (`features.use_log_digest`) is now exposed in config.
- Enabling `UseLogDigest` in the demo cluster prevented the HTTP listener from ever becoming ready (readiness probe stayed red). For now the flag is set to false in the runtime config so the new build can serve traffic.

Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3492222622
Original created: 2025-11-05T16:36:04Z
Re-enabled the log digest path in demo and rolled the cluster:
- Updated `serviceradar-config` to set `features.use_log_digest=true`, then rebuilt/pushed `ghcr.io/carverauto/serviceradar-core` (`sha256:ab992d84af2ad9500ce0c4d37c2f7b3231eb76a145c267acdb0a205388c0bb9b`, tag `sha-057b69fdcc8cb45a3d1e46ffb395d910474d897a`).
- The rollout is healthy (`serviceradar-core-c8cf58f59-dcgvb` reached 1/1 ready in ~70s).

Follow-up: Proton is rejecting the streaming tail with `code: 62 ... Syntax error ... EMIT CHANGES`; the aggregator is retrying with exponential backoff. We'll need to adjust the tail query so the digest keeps up-to-date once the flag stays on.

Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3492446609
Original created: 2025-11-05T17:20:38Z
Validated the streaming log tailer end-to-end:
- Built/pushed `ghcr.io/carverauto/serviceradar-core@sha256:c587c6cadf6b1e26182ae93641c42d75d236e93a3c0d76b41267140cee379355` and rolled the demo core deployment.
- Emitted a test entry (`Phase3b log-digest test`) to exercise the digest path.
- Queried `/api/logs/critical` and `/api/logs/critical/counters` with an admin JWT; the API served the new entry directly from the in-memory digest, confirming the stream keeps up without relapsing to Proton.
- No more `EMIT CHANGES` syntax errors.

Follow-up: none for the Phase 3b tailer; next we can look at trimming that bootstrap timeout if it shows up in SLOs.
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3492559591
Original created: 2025-11-05T17:46:51Z
Follow-up cleanup from the streaming rollout:
- Fixed the `COUNT(*)` scan type in the service registry (`uint64` instead of `int`), rebuilt/pushed `ghcr.io/carverauto/serviceradar-core@sha256:8170567691819242005bddd711f6c7635ed49b2f02ce66704ead70b8d210f278`, and rolled the demo core deployment.
- The scan errors (`converting UInt64 to *int is unsupported`) are gone; `/api/logs/critical` still returns the latest fatal log from the digest stream.

With the log digest tailer feeding cleanly and the poller cache check fixed, Phase 3b is fully green. Next up is only ongoing monitoring.
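The scan-type fix above suggests a general pattern: match the scan target to the driver's wire type for `COUNT(*)` before narrowing. A hedged sketch (the helper is hypothetical, not from the ServiceRadar codebase):

```go
package main

import "fmt"

// countToInt64 normalizes the value a driver returns for COUNT(*).
// Proton/ClickHouse-style drivers hand back uint64, and scanning that
// into *int fails ("converting UInt64 to *int is unsupported"), so the
// scan target should match the wire type before any narrowing.
func countToInt64(v interface{}) (int64, error) {
	switch n := v.(type) {
	case uint64:
		if n > 1<<62 { // conservative overflow guard
			return 0, fmt.Errorf("count %d overflows int64", n)
		}
		return int64(n), nil
	case int64:
		return n, nil
	default:
		return 0, fmt.Errorf("unexpected count type %T", v)
	}
}

func main() {
	n, err := countToInt64(uint64(50009))
	if err != nil {
		fmt.Println("scan error:", err)
		return
	}
	fmt.Println(n)
}
```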
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3492579691
Original created: 2025-11-05T17:51:45Z
Proton connection pressure is cleared up:
- Built/pushed `sha256:85a9f7f4860f99b1ce0bd182a44880af4505c712f28a63e8c89eb1a60363c78a` and rolled `serviceradar-core` in demo.
- `proton: acquire conn timeout` errors during edge onboarding / poller cache refresh are no longer appearing after the redeploy; the log tailer and registry operations now run without starving the pool.

Remaining noisy log is the legacy poller DELETE syntax (tracked separately). Otherwise the new connection ceiling keeps the registry + onboarding flows happy.
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3493548590
Original created: 2025-11-05T21:24:17Z
Updates from today:
- When hydration counts diverge from `table(unified_devices)` we now log both counts plus a sample of missing device_ids (`pkg/registry/hydrate.go`, `pkg/registry/diagnostics.go`, `pkg/core/stats_aggregator.go`). No mismatches yet; hydration is reporting 50,007 devices while Proton currently reports 50,009.
- Switched the diagnostics DDL to `ALTER STREAM` (instead of `ALTER TABLE`) so the new diagnostics would not spam with Proton errors.
- The dashboard now reads `/api/stats` for its top-line device counts and only falls back to SRQL if the cache is empty. The tile is still bouncing between ~49.5k and 50k because Kong is rejecting internal SRQL calls with 401 ("Unauthorized"), so the fallback path only succeeds intermittently. That explains the eventual consistency we were seeing earlier.
- The 401s come from `serviceradar-web` → `serviceradar-kong` rather than the registry cache itself.

Next steps:

- Figure out why `/api/query` via `serviceradar-kong:8000` is unauthorised and either fix the auth headers or point the internal client straight at the OCaml SRQL service.

Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1924#issuecomment-3493900823
Original created: 2025-11-05T22:39:57Z
Observed another skew in the analytics "Total Devices" tile after today's core deploy. The value climbed to ~72k even though Proton and the registry both still report ~50k devices.
What we have already done:
- Updated `pkg/core/stats_aggregator.go` and rolled the new core image across the demo namespace.
- `/api/stats` is live and the analytics dashboard queries it first, only falling back to SRQL when the cache turns up empty or zero.

Current working theories:

- The cache intermittently reports `0`, and the fallback SRQL query (`in:devices time:last_7d stats:"count() as total"`) over-counts versioned rows.
- The registry snapshot needs re-checking (compare `registry.SnapshotRecords()` length vs. Proton again).

Next actions before another roll-out:

- Capture `/api/stats` responses alongside the fallback SRQL payload when the UI shows the inflated number (e.g. log both in the browser console or add telemetry in `dataService.fetchAllAnalyticsData`).
- Either drop the SRQL fallback once `/api/stats` is GA, or change the SRQL to respect `_merged_into`/`_deleted` so the count matches Proton.

I updated `newarch_plan.md` to capture these investigations so we do not repeat the same fixes.