Shallow copies of HostResult in summary collection/stream cause data races #686

Closed
opened 2026-03-28 04:27:25 +00:00 by mfreeman451 · 1 comment
Owner

Imported from GitHub.

Original GitHub issue: #2148
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/2148
Original created: 2025-12-16T05:17:19Z


Summary

  • Context: The collectShardSummaries and processShardForSummary functions in base_processor.go collect host information from sharded maps to return aggregated sweep summaries to callers.
  • Bug: Both functions perform shallow copies of HostResult structs using *host, which copies only the struct fields but shares the underlying data of pointer and slice fields (PortResults, PortMap, ICMPStatus).
  • Actual vs. expected: After the shard locks are released, the copied HostResult structs continue to reference the same underlying slices and maps as the originals, allowing concurrent read/write access. The expected behavior is that the returned data should be independent copies that can be safely accessed without holding the shard lock.
  • Impact: This causes data races when the original host data is modified (e.g., new ports added) while the copied summary data is being accessed concurrently, leading to undefined behavior, potential crashes, or data corruption.

Code with bug

Bug in collectShardSummaries (line 442)

// Copy host data
for _, host := range shard.hostMap {
	if host.Available {
		summary.availableHosts++
	}

	if host.ICMPStatus != nil && host.ICMPStatus.Available {
		summary.icmpHosts++
	}

	summary.hosts = append(summary.hosts, *host) // <-- BUG 🔴 Shallow copy shares PortResults, PortMap, ICMPStatus
}

Bug in processShardForSummary (line 792)

// Stream host data and count
for _, host := range shard.hostMap {
	select {
	case hostCh <- *host: // <-- BUG 🔴 Shallow copy shares PortResults, PortMap, ICMPStatus
		if host.Available {
			result.availableHosts++
		}

		if host.ICMPStatus != nil && host.ICMPStatus.Available {
			result.icmpHosts++
		}

		result.totalHosts++
	case <-ctx.Done():
		return result, false
	}
}

Evidence

Failing test

Test script

package sweeper

import (
	"context"
	"sync"
	"testing"
	"time"

	"github.com/carverauto/serviceradar/pkg/logger"
	"github.com/carverauto/serviceradar/pkg/models"
	"github.com/stretchr/testify/require"
)

// TestShallowCopyDataRace demonstrates the data race bug in collectShardSummaries
// Run with: go test -race -run TestShallowCopyDataRace
func TestShallowCopyDataRace(t *testing.T) {
	config := &models.Config{
		Ports: []int{80, 443, 8080},
	}

	processor := NewBaseProcessor(config, logger.NewTestLogger())

	// Add initial host with one port
	result := &models.Result{
		Target: models.Target{
			Host: "192.168.1.1",
			Port: 80,
			Mode: models.ModeTCP,
		},
		Available: true,
		RespTime:  time.Millisecond * 10,
	}

	err := processor.Process(result)
	require.NoError(t, err)

	var wg sync.WaitGroup
	wg.Add(2)

	// Goroutine 1: Get summary which performs shallow copy
	go func() {
		defer wg.Done()

		ctx := context.Background()
		summary, err := processor.GetSummary(ctx)
		require.NoError(t, err)

		// After GetSummary returns, the RLock is released
		// Sleep to increase likelihood of concurrent access
		time.Sleep(10 * time.Millisecond)

		// Access the copied data
		// If shallow copy, this will race with modifications in goroutine 2
		if len(summary.Hosts) > 0 {
			host := summary.Hosts[0]
			// Access shared slice
			if len(host.PortResults) > 0 {
				_ = host.PortResults[0].Port
			}
			// Access shared map
			if host.PortMap != nil {
				_ = host.PortMap[80]
			}
		}
	}()

	// Goroutine 2: Modify the host by adding more ports
	go func() {
		defer wg.Done()

		// Small delay to let GetSummary start
		time.Sleep(5 * time.Millisecond)

		// Add more ports - this will modify the PortResults slice and PortMap
		for _, port := range []int{443, 8080} {
			result := &models.Result{
				Target: models.Target{
					Host: "192.168.1.1",
					Port: port,
					Mode: models.ModeTCP,
				},
				Available: true,
				RespTime:  time.Millisecond * 15,
			}

			err := processor.Process(result)
			require.NoError(t, err)
		}
	}()

	wg.Wait()
}

Test output

=== RUN   TestShallowCopyDataRace
==================
WARNING: DATA RACE
Read at 0x00c0001f5740 by goroutine 10:
  runtime.mapaccess1_fast64()
      /root/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.0.linux-amd64/src/internal/runtime/maps/runtime_fast64_swiss.go:17 +0x0
  github.com/carverauto/serviceradar/pkg/sweeper.TestShallowCopyDataRace.func1()
      /home/user/serviceradar/pkg/sweeper/shallow_copy_race_test.go:62 +0x1cc

Previous write at 0x00c0001f5740 by goroutine 11:
  runtime.mapassign_fast64()
      /root/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.0.linux-amd64/src/internal/runtime/maps/runtime_fast64_swiss.go:195 +0x0
  github.com/carverauto/serviceradar/pkg/sweeper.(*BaseProcessor).updatePortStatus()
      /home/user/serviceradar/pkg/sweeper/base_processor.go:366 +0x6fb
  github.com/carverauto/serviceradar/pkg/sweeper.(*BaseProcessor).processTCPResult()
      /home/user/serviceradar/pkg/sweeper/base_processor.go:314 +0x3cb
  github.com/carverauto/serviceradar/pkg/sweeper.(*BaseProcessor).Process()
      /home/user/serviceradar/pkg/sweeper/base_processor.go:303 +0x397
  github.com/carverauto/serviceradar/pkg/sweeper.TestShallowCopyDataRace.func2()
      /home/user/serviceradar/pkg/sweeper/shallow_copy_race_test.go:86 +0x225

Goroutine 10 (running) created at:
  github.com/carverauto/serviceradar/pkg/sweeper.TestShallowCopyDataRace()
      /home/user/serviceradar/pkg/sweeper/shallow_copy_race_test.go:41 +0x54a
  testing.tRunner()
      /root/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.0.linux-amd64/src/testing/testing.go:1934 +0x21c
  testing.(*T).Run.gowrap1()
      /root/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.0.linux-amd64/src/testing/testing.go:1997 +0x44

Goroutine 11 (finished) created at:
  github.com/carverauto/serviceradar/pkg/sweeper.TestShallowCopyDataRace()
      /home/user/serviceradar/pkg/sweeper/shallow_copy_race_test.go:68 +0x654
  testing.tRunner()
      /root/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.0.linux-amd64/src/testing/testing.go:1934 +0x21c
  testing.(*T).Run.gowrap1()
      /root/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.25.0.linux-amd64/src/testing/testing.go:1997 +0x44
==================
    testing.go:1617: race detected during execution of test
--- FAIL: TestShallowCopyDataRace (0.01s)
FAIL
FAIL	github.com/carverauto/serviceradar/pkg/sweeper	0.037s
FAIL

Example

Consider this scenario:

  1. Initial state: Host "192.168.1.1" exists in shard 5 with one open port (80). The HostResult contains:

    • PortResults: slice with 1 element pointing to PortResult{Port: 80}
    • PortMap: map with entry 80 -> PortResult{Port: 80}
  2. Thread A calls GetSummary():

    • Acquires shard.mu.RLock() at line 418
    • Iterates through hosts and performs summary.hosts = append(summary.hosts, *host) at line 442
    • This creates a new HostResult struct, but PortResults and PortMap still point to the same underlying data
    • Releases shard.mu.RUnlock() at line 419
    • Returns the summary to the caller
  3. Thread A continues to use the summary (lock now released):

    • Accesses summary.Hosts[0].PortMap[80] to read port information
    • This reads from the shared map
  4. Thread B calls Process() for port 443 on the same host:

    • Acquires shard.mu.Lock() at line 281
    • Calls updatePortStatus() at line 314
    • Executes host.PortMap[result.Target.Port] = portResult at line 366
    • This writes to the same map that Thread A is reading
  5. Data race: Thread A reads from PortMap while Thread B writes to it, without any synchronization.

Full context

The BaseProcessor in pkg/sweeper/base_processor.go is a concurrent data structure that maintains network sweep results across multiple sharded maps. Each shard protects its host data with a read-write mutex (sync.RWMutex).

The processor provides two methods to retrieve aggregated sweep summaries:

  1. GetSummary() (line 509) - collects all host results into memory and returns them
  2. GetSummaryStream() (line 838) - streams host results through a channel to avoid large memory allocations

Both methods call internal functions that copy host data from shards:

  • collectShardSummaries() (line 405) is called by GetSummary()
  • processShardForSummary() (line 769) is called by GetSummaryStream()

The issue is that these functions acquire read locks, copy the data, and then release the locks before the copied data is used by callers. The shallow copy means the returned data structures still reference the same underlying memory that can be modified by concurrent Process() calls.

The HostResult struct (defined in pkg/models/sweep.go) contains several fields that require deep copying:

type HostResult struct {
	Host         string
	Available    bool
	FirstSeen    time.Time
	LastSeen     time.Time
	PortResults  []*PortResult       // <-- Slice of pointers (shared)
	PortMap      map[int]*PortResult // <-- Map with pointers (shared)
	ICMPStatus   *ICMPStatus         // <-- Pointer (shared)
	ResponseTime time.Duration
}

When new ports are discovered for a host, updatePortStatus() (line 318) modifies both PortResults and PortMap:

  • Line 365: host.PortResults = append(host.PortResults, portResult)
  • Line 366: host.PortMap[result.Target.Port] = portResult

These modifications can happen concurrently with reads from the "copied" summaries, creating data races.

The same pattern exists in two additional locations in pkg/sweeper/memory_store.go:

  • Line 408 in convertToSlice()
  • Line 511 (location not verified but mentioned in grep output)

Why has this bug gone undetected?

This data race has likely gone undetected for several reasons:

  1. Timing-dependent: The race window is very small - it only occurs if a summary is being accessed after its shard lock is released AND a concurrent Process() call modifies the same host data. In typical usage patterns, summaries might be consumed quickly or the specific timing doesn't align.

  2. Go's memory model: Even with the race condition, Go's runtime may not crash immediately. The behavior is undefined, but often "appears to work" - especially on systems with strong memory ordering (like x86-64). The corruption might manifest as stale data rather than crashes.

  3. Not detected without race detector: The bug is only reliably detectable with Go's race detector (go test -race). Standard testing without the race detector will typically pass, as the undefined behavior may not cause obvious failures.

  4. Read-heavy workload: If the typical workload involves more summary reads than concurrent modifications, or if modifications happen in between summary retrievals rather than during, the race is less likely to manifest.

  5. Slice/map implementation details: The Go runtime includes a lightweight, best-effort check for unsynchronized map access that aborts with fatal error: concurrent map read and map write, but it only catches some interleavings; races it misses proceed silently, so corruption can go unnoticed.

  6. Testing patterns: The existing test suite may not have concurrent scenarios that exercise both summary retrieval and host modification simultaneously. The tests in base_processor_test.go often use defer processor.cleanup() which suggests they test one operation at a time rather than true concurrent access patterns.

Recommended fix

The fix requires performing deep copies of the HostResult data structures. Here's the recommended approach:

Create a helper function to deep copy a HostResult:

// deepCopyHostResult creates a deep copy of a HostResult
func deepCopyHostResult(src *models.HostResult) models.HostResult { // <-- FIX 🟢
	dst := models.HostResult{
		Host:         src.Host,
		Available:    src.Available,
		FirstSeen:    src.FirstSeen,
		LastSeen:     src.LastSeen,
		ResponseTime: src.ResponseTime,
	}

	// Deep copy PortResults slice
	if src.PortResults != nil {
		dst.PortResults = make([]*models.PortResult, len(src.PortResults))
		for i, pr := range src.PortResults {
			dst.PortResults[i] = &models.PortResult{ // <-- FIX 🟢
				Port:      pr.Port,
				Available: pr.Available,
				RespTime:  pr.RespTime,
				Service:   pr.Service,
			}
		}
	}

	// Deep copy PortMap
	if src.PortMap != nil {
		dst.PortMap = make(map[int]*models.PortResult, len(src.PortMap))
		for port, pr := range src.PortMap {
			dst.PortMap[port] = &models.PortResult{
				Port:      pr.Port,
				Available: pr.Available,
				RespTime:  pr.RespTime,
				Service:   pr.Service,
			}
		}
	}

	// Deep copy ICMPStatus
	if src.ICMPStatus != nil {
		dst.ICMPStatus = &models.ICMPStatus{
			Available:  src.ICMPStatus.Available,
			RoundTrip:  src.ICMPStatus.RoundTrip,
			PacketLoss: src.ICMPStatus.PacketLoss,
		}
	}

	return dst
}

Then replace the shallow copy operations:

  • Line 442: Change summary.hosts = append(summary.hosts, *host) to summary.hosts = append(summary.hosts, deepCopyHostResult(host))
  • Line 792: Change hostCh <- *host to hostCh <- deepCopyHostResult(host)

The same fix should be applied to pkg/sweeper/memory_store.go at lines 408 and 511.

Related bugs

The same shallow copy pattern exists in pkg/sweeper/memory_store.go:

  • Line 408 in convertToSlice() function
  • Potentially line 511 (mentioned in grep output)

These locations should be reviewed and fixed using the same deep copy approach.


Imported GitHub comment.

Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/issues/2148#issuecomment-3662134269
Original created: 2025-12-16T19:51:47Z


closing as completed
