PRD: Multi-KV configuration via Admin UI #643
Imported from GitHub.
Original GitHub issue: #1938
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1938
Original created: 2025-11-13T03:35:41Z
Background
Today pollers, agents, and checker services implicitly discover their KV endpoint from process environment variables (for example `KV_ADDRESS`, `KV_SEC_MODE`, etc.). The Admin UI shows the config JSON stored in KV (e.g., `config/pollers/<id>.json`), but those blobs rarely contain a `kv_address`. That works for the current single-KV setup, yet it blocks operators who need to pin specific workloads to different KV backends (hub/leaf JetStream domains, segregated edge KV clusters, staging-to-production transitions, etc.).

Problem
Operators cannot use the UI or API to change the KV endpoint per workload. Any change requires SSHing into the host (or editing Compose/k8s manifests) to update environment variables, then restarting the process. This breaks the "watcher-driven reload" story we just completed and prevents multi-KV topologies (hub + one or more leaves) where a poller should follow the nearest KV.
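To illustrate the gap, a poller config blob stored in KV today might look like the following. This is a hypothetical sketch; the field names are illustrative, not the actual schema:

```json
{
  "poller_id": "docker-poller",
  "core_address": "core:50052",
  "listen_addr": ":50053"
}
```

Because no `kv_address` (or equivalent) appears in the blob, editing it in the Admin UI cannot move the workload to a different KV backend; the endpoint remains fixed by `KV_ADDRESS` in the process environment.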
Goals
Non-goals
User Stories
- Pin `docker-poller` to the leaf KV while keeping core services on the hub KV, all within the Admin UI, without editing manifests.
- Point a workload at `lab-kv:50057`, verify changes via watchers, then revert.

Requirements
- Extend workload configs (`pkg/poller.Config`, `pkg/agent.ServerConfig`, checker configs, etc.) with a `kv_profile` or similar reference that maps to the profile, while keeping `kv_address` overrides for backwards compatibility.
- Update the Admin API (`/api/admin/config/...`) to accept/return the new fields and validate that referenced profiles exist.
- Store profiles in KV (e.g., `config/kv_profiles/<name>.json`) or in the existing config store with watcher support, so edits replicate to all services.
- On change, rebuild the `KVManager` (or reconnect) without requiring a process restart.

Technical Considerations
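One possible shape for a stored profile, sketched under assumptions — the PRD leaves the exact schema open, and every field name below is illustrative. A file such as `config/kv_profiles/leaf.json` could carry the concrete connection details:

```json
{
  "name": "leaf",
  "kv_address": "lab-kv:50057",
  "kv_sec_mode": "mtls"
}
```

A poller config would then reference it by adding something like `"kv_profile": "leaf"`, while a literal `kv_address` field in the same config would continue to act as an explicit override.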
- Workloads without a `kv_profile` continue reading env vars. When a profile is selected, the rendered configuration must include the concrete fields so legacy binaries still work.
- Update config loading (`pkg/config`, `cmd/*`) accordingly.
- Propagate `kv_profile` metadata into generated configs.

Open Questions
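The fallback behaviour above can be sketched as a resolution order: explicit `kv_address` override, then named profile, then legacy env vars. This is a minimal illustration, not the actual implementation; `KVProfile` and `resolveKV` are hypothetical names:

```go
package main

import (
	"fmt"
	"os"
)

// KVProfile is a hypothetical shape for a stored KV profile;
// the PRD leaves the real schema open.
type KVProfile struct {
	Name    string
	Address string
	SecMode string
}

// resolveKV applies the precedence the PRD implies:
// explicit kv_address override > named profile > legacy env vars.
func resolveKV(kvAddress, kvProfile string, profiles map[string]KVProfile) (string, error) {
	if kvAddress != "" {
		// Explicit override wins, preserving backwards compatibility.
		return kvAddress, nil
	}
	if kvProfile != "" {
		p, ok := profiles[kvProfile]
		if !ok {
			return "", fmt.Errorf("unknown kv profile %q", kvProfile)
		}
		return p.Address, nil
	}
	// Legacy behaviour: fall back to the process environment.
	if addr := os.Getenv("KV_ADDRESS"); addr != "" {
		return addr, nil
	}
	return "", fmt.Errorf("no KV endpoint configured")
}

func main() {
	profiles := map[string]KVProfile{
		"leaf": {Name: "leaf", Address: "lab-kv:50057", SecMode: "mtls"},
	}
	addr, err := resolveKV("", "leaf", profiles)
	fmt.Println(addr, err)
}
```

Rendering the resolved (concrete) fields into the generated config, rather than only the profile name, is what keeps legacy binaries working: they never need to learn the profile indirection.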
Milestones
- Documentation: `docs/docs/kv-configuration.md`, `agents.md`, release notes.

Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/1938#issuecomment-3713824023
Original created: 2026-01-06T09:11:16Z
closing as will not implement