bug(dire): not de-duplicating agents #982
Imported from GitHub.
Original GitHub issue: #2758
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/2758
Original created: 2026-02-10T00:06:57Z
Describe the bug
Our agents are turned into new device records every time their k8s pod IP address changes. We should be able to reconcile device records from agents easily, since agents provide a strong identifier: a unique agent name or ID. I'm not sure the agent ID mechanism actually exists; we may want to set one when we onboard an agent or first see one. It's unclear how the ID would persist on the agent side. We could embed it in the onboarding token, but what about agents that have already been onboarded? It's also unclear where the agent would store the ID, and what would happen if the end user reinstalled the agent or wiped the server.
We only have two agents in the system: one running in k8s in the cluster (k8s-agent) and another, agent-dusk, running on 192.168.2.22.
Instead we're showing three agents here, two with 10.x addressing, which clearly indicates they are coming from our k8s cluster.
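The de-duplication the issue asks for amounts to keying device records on the agent ID rather than the pod IP, so a known agent checking in from a new IP updates its record instead of creating a duplicate. A minimal in-memory sketch of that reconciliation (the `Registry`/`Upsert` names are illustrative, not ServiceRadar's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// DeviceRecord is a minimal stand-in for a ServiceRadar device record.
type DeviceRecord struct {
	AgentID string
	IP      string
}

// Registry reconciles agent check-ins by agent ID: a known agent reporting
// from a new pod IP updates its existing record instead of creating a new one.
type Registry struct {
	mu   sync.Mutex
	byID map[string]*DeviceRecord
}

func NewRegistry() *Registry {
	return &Registry{byID: make(map[string]*DeviceRecord)}
}

// Upsert records a check-in and reports whether a new record was created.
func (r *Registry) Upsert(agentID, ip string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if rec, ok := r.byID[agentID]; ok {
		rec.IP = ip // same agent, new pod IP: update in place
		return false
	}
	r.byID[agentID] = &DeviceRecord{AgentID: agentID, IP: ip}
	return true
}

// Count returns the number of distinct device records.
func (r *Registry) Count() int {
	r.mu.Lock()
	defer r.mu.Unlock()
	return len(r.byID)
}

func main() {
	reg := NewRegistry()
	reg.Upsert("k8s-agent", "10.42.0.7")
	reg.Upsert("k8s-agent", "10.42.1.19") // pod rescheduled, new IP
	reg.Upsert("agent-dusk", "192.168.2.22")
	fmt.Println("device records:", reg.Count()) // 2, not 3
}
```

With the two scenarios from this issue, the k8s agent's pod reschedule produces an IP update rather than a third record.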
Imported GitHub comment.
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/2758#issuecomment-3875227293
Original created: 2026-02-10T04:18:16Z
Closing; fixed.