poller update missed in last check in #2317
Reference
carverauto/serviceradar!2317
No description provided.
Imported from GitHub pull request.
Original GitHub pull request: #1757
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/1757
Original created: 2025-10-14T03:15:15Z
Original updated: 2025-10-14T03:36:48Z
Original head: carverauto/serviceradar:sysmonvm/integration_tests_2
Original base: main
Original merged: 2025-10-14T03:36:45Z by @mfreeman451
PR Type
Bug fix, Enhancement
Description
- Fixed missing client assignment after the `connectToCore` call during initialization
- Refactored `connectToCore` to return clients instead of setting them directly
- Improved thread safety in reconnection logic with proper mutex handling
- Enhanced error handling during client closure operations
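The first two bullets describe the same fix from two angles: `connectToCore` now returns the clients, and the caller assigns them. A minimal sketch of that shape, with all names (`coreClient`, `newPoller`, field names) assumed for illustration rather than taken from the actual pkg/poller/poller.go:

```go
package main

// Illustrative sketch only: connectToCore returns the client instead of
// mutating the poller, so the caller's assignment cannot be skipped.
// All types and names here are assumptions, not the real poller code.

type coreClient struct{ addr string }

type poller struct {
	coreAddr   string
	grpcClient *coreClient
}

// connectToCore returns the new client rather than setting p.grpcClient
// directly, matching the refactor described above.
func (p *poller) connectToCore() (*coreClient, error) {
	return &coreClient{addr: p.coreAddr}, nil
}

func newPoller(addr string) (*poller, error) {
	p := &poller{coreAddr: addr}
	c, err := p.connectToCore()
	if err != nil {
		return nil, err
	}
	p.grpcClient = c // the assignment the original bug left out
	return p, nil
}
```

Returning the client makes a forgotten assignment visible at the call site instead of silently leaving the struct field nil.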
Diagram Walkthrough
File Walkthrough
poller.go
Fix client assignment and improve connection thread safety (pkg/poller/poller.go)
- Refactored `connectToCore` to return client instances instead of directly assigning to struct fields
- Added the missing client assignment after the `connectToCore` call
- Updated `reconnectCore` to prevent deadlocks and race conditions
- Updated `UpdateConfig` to properly manage client lifecycle during reconnection
Imported GitHub PR comment.
Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1757#issuecomment-3399954606
Original created: 2025-10-14T03:15:46Z
PR Compliance Guide 🔍
Below is a summary of compliance checks for this PR:
Potential deadlock
Description: Closing the existing gRPC client while holding the mutex may block other goroutines that need the lock if Close() internally performs blocking operations or callbacks, potentially leading to a lock convoy or deadlock under high load.
poller.go [487-494]
Referred Code
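The referred snippet was not preserved in this import. As an illustration only, the safer pattern the check points at — capture the client under the lock, release, then call the potentially blocking Close() outside the lock — might be sketched like this (types and field names are assumptions, not the actual poller.go code):

```go
package main

import "sync"

// client abstracts the Close() behavior of the gRPC client (assumption).
type client interface{ Close() error }

// fakeClient is an illustrative stand-in for the real gRPC client.
type fakeClient struct{ closed bool }

func (f *fakeClient) Close() error { f.closed = true; return nil }

type poller struct {
	mu         sync.Mutex
	grpcClient client
}

// swapClient replaces the current client without holding the mutex
// across Close(), so a blocking Close() cannot stall other goroutines
// waiting on the lock (the lock-convoy risk described above).
func (p *poller) swapClient(newClient client) {
	p.mu.Lock()
	old := p.grpcClient
	p.grpcClient = newClient
	p.mu.Unlock()

	if old != nil {
		_ = old.Close() // blocking work happens outside the lock
	}
}
```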
🎫 No ticket provided
Codebase context is not defined
Follow the guide to enable codebase context checks.
No custom compliance provided
Follow the guide to enable custom compliance check.
Compliance status legend
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
Imported GitHub PR comment.
Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1757#issuecomment-3399956226
Original created: 2025-10-14T03:16:53Z
PR Code Suggestions ✨
Explore these optional code suggestions:
✅ Prevent race condition during reconnection

Suggestion Impact: The commit refactored reconnect and shutdown paths to avoid closing the existing client while holding the lock. It now captures the old client under lock, releases the lock, closes it, and only then proceeds, reducing race conditions. Although the commit still connects after releasing the lock (not before locking as suggested), it implements the core idea of not closing under lock and atomically swapping clients.

Code diff:
To prevent a race condition in `reconnectCore`, first establish the new gRPC connection, then acquire the lock to atomically swap the old client with the new one.
pkg/poller/poller.go [483-510]
[To ensure code accuracy, apply this suggestion manually]
Suggestion importance [1-10]: 9
Why: The suggestion correctly identifies a critical race condition introduced in the PR that could lead to a nil pointer dereference and a panic, and it proposes a robust fix.
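The suggested order — dial first, then lock only for the swap — could be sketched as follows. `connectToCore` here is a hypothetical stand-in for the real dialing logic in pkg/poller/poller.go, and the types are illustrative, not the actual code:

```go
package main

import "sync"

// Illustrative types; the real poller differs.
type client interface{ Close() error }

type fakeClient struct{ closed bool }

func (f *fakeClient) Close() error { f.closed = true; return nil }

type poller struct {
	mu         sync.Mutex
	grpcClient client
}

// connectToCore stands in for the real gRPC dialing logic (assumption).
func (p *poller) connectToCore() (client, error) {
	return &fakeClient{}, nil
}

// reconnectCore dials before taking the lock, then swaps atomically,
// so readers never observe a nil client mid-reconnect.
func (p *poller) reconnectCore() error {
	newClient, err := p.connectToCore() // connect before locking
	if err != nil {
		return err // on failure the old client stays in place
	}

	p.mu.Lock()
	old := p.grpcClient
	p.grpcClient = newClient // atomic swap under the lock
	p.mu.Unlock()

	if old != nil {
		_ = old.Close() // close the replaced client outside the lock
	}
	return nil
}
```

A side benefit of this ordering: if the dial fails, the poller keeps its working client instead of being left with none.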
✅ Fix race condition on config update

Suggestion Impact: The commit changes the reconnect logic to avoid closing the existing gRPC client while holding the lock; it now grabs the client under lock, releases the lock, closes it outside the lock, and then proceeds to connect and later set the new client. This addresses the race condition the suggestion targeted, though it does not connect before locking as suggested.

Code diff:
To fix a race condition in `UpdateConfig`, first establish the new gRPC connection, then acquire the lock to atomically replace the old client with the new one.
pkg/poller/poller.go [628-649]
[Suggestion processed]
Suggestion importance [1-10]: 9
Why: The suggestion correctly identifies a critical race condition in `UpdateConfig` that was introduced in the PR, which could cause a panic, and provides the correct pattern to fix it.