Update/troubleshooting sysmonvm #2318

Merged
mfreeman451 merged 9 commits from refs/pull/2318/head into main 2025-10-14 06:28:41 +00:00
mfreeman451 commented 2025-10-14 03:37:27 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #1758
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/1758
Original created: 2025-10-14T03:37:27Z
Original updated: 2025-10-14T06:28:44Z
Original head: carverauto/serviceradar:update/troubleshooting_sysmonvm
Original base: main
Original merged: 2025-10-14T06:28:41Z by @mfreeman451

PR Type

Enhancement


Description

  • Enhanced CPU metrics with cluster-level aggregation and core labeling

  • Added caching mechanism for hostfreq data collection with 2-second TTL

  • Improved data validation and error handling for frequency measurements

  • Fixed indentation and formatting issues in service initialization


Diagram Walkthrough

flowchart LR
  collector["CPU Frequency Collector"] --> hostfreq["hostfreq Darwin"]
  hostfreq --> cache["Cache Layer (2s TTL)"]
  cache --> snapshot["Snapshot with Clusters"]
  snapshot --> service["Sysmon Service"]
  service --> metrics["Enhanced CPU Metrics"]

File Walkthrough

Relevant files
Enhancement
service.go
Enhanced CPU metrics with cluster support                               

pkg/checker/sysmonvm/service.go

  • Added Label and Cluster fields to CPU metrics for core identification
  • Introduced cluster-level frequency aggregation in status response
  • Added Clusters field to status payload structure
  • Fixed indentation for freqCollector initialization
+20/-8   
collector.go
Extended frequency data structures with cluster support   

pkg/cpufreq/collector.go

  • Added Label and Cluster fields to CoreFrequency struct
  • Introduced ClusterFrequency struct for cluster-level metrics
  • Updated Snapshot to include both cores and clusters
  • Added hostfreq as new data source option
+12/-2   
hostfreq_darwin.go
Added caching and cluster frequency collection for Darwin

pkg/cpufreq/hostfreq_darwin.go

  • Implemented caching mechanism with 2-second TTL and mutex protection
  • Added cluster frequency collection from hostfreq payload
  • Enhanced validation with NaN/Inf checks and minimum interval
    enforcement
  • Introduced clusterFromLabel function to extract cluster names from
    core labels
+132/-2 
metrics.go
Extended metrics models with cluster-level CPU data           

pkg/models/metrics.go

  • Added Label and Cluster fields to CPUMetric struct
  • Introduced new CPUClusterMetric struct with comprehensive
    documentation
  • Added Clusters field to SysmonMetrics with omitempty tag
+23/-0   
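The `clusterFromLabel` helper mentioned in the hostfreq_darwin.go walkthrough is not shown in this page; a plausible sketch, assuming cluster names are simply the non-digit prefix of labels like `ECPU0` or `PCPU3`, would look like this (the real implementation may differ):

```go
package main

import (
	"fmt"
	"unicode"
)

// clusterFromLabel derives a cluster name ("ECPU", "PCPU") from a core
// label ("ECPU0", "PCPU3") by taking the leading non-digit prefix.
// A label with no digits is returned unchanged.
func clusterFromLabel(label string) string {
	for i, r := range label {
		if unicode.IsDigit(r) {
			return label[:i]
		}
	}
	return label
}

func main() {
	fmt.Println(clusterFromLabel("ECPU0")) // ECPU
	fmt.Println(clusterFromLabel("PCPU3")) // PCPU
}
```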

qodo-code-review[bot] commented 2025-10-14 03:38:02 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1758#issuecomment-3399988343
Original created: 2025-10-14T03:38:02Z

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Cross-tenant data leak

Description: The in-memory cache hostfreqCache stores the latest frequency snapshot globally without
size/expiry enforcement beyond TTL and without context scoping, which could expose
cross-tenant data if used in multi-tenant agents or long-lived processes sharing the same
address space.
hostfreq_darwin.go [226-246]

Referred Code
	hostfreqCache.mu.Lock()
	defer hostfreqCache.mu.Unlock()

	if hostfreqCache.snapshot == nil {
		return nil
	}

	if time.Since(hostfreqCache.fetched) > hostfreqCacheTTL {
		return nil
	}

	return snapshotClone(hostfreqCache.snapshot)
}

func hostfreqCacheStore(snapshot *Snapshot) {
	hostfreqCache.mu.Lock()
	defer hostfreqCache.mu.Unlock()

	hostfreqCache.snapshot = snapshotClone(snapshot)
	hostfreqCache.fetched = time.Now()
}
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided

Follow the guide to enable custom compliance check.

Compliance status legend
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
qodo-code-review[bot] commented 2025-10-14 03:39:03 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1758#issuecomment-3399989774
Original created: 2025-10-14T03:39:03Z

PR Code Suggestions

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
Simplify data models to reduce redundancy

Remove redundant metadata fields like Timestamp, HostID, and AgentID from the
CPUMetric and CPUClusterMetric models. This data should be placed in a
higher-level response structure to reduce payload size and improve data
normalization.

Examples:

pkg/models/metrics.go [73-92]
type CPUMetric struct {
	// ID number of the CPU core
	CoreID int32 `json:"core_id" example:"0"`
	// Platform label for the CPU core (e.g., ECPU0, PCPU3)
	Label string `json:"label,omitempty" example:"ECPU0"`
	// Cluster identifier this core belongs to (e.g., ECPU, PCPU)
	Cluster string `json:"cluster,omitempty" example:"ECPU"`
	// Usage percentage (0-100)
	UsagePercent float64 `json:"usage_percent" example:"45.2"`
	// Instantaneous frequency in Hz, if available.

 ... (clipped 10 lines)
pkg/models/metrics.go [94-109]
// CPUClusterMetric represents aggregated CPU cluster telemetry.
// @Description Aggregated metrics for a logical CPU cluster (e.g., efficiency or performance cores).
type CPUClusterMetric struct {
	// Cluster name (e.g., ECPU, PCPU)
	Name string `json:"name" example:"ECPU"`
	// Instantaneous frequency in Hz, if available.
	FrequencyHz float64 `json:"frequency_hz" example:"1700000000"`
	// When this metric was collected
	Timestamp time.Time `json:"timestamp" example:"2025-04-24T14:15:22Z"`
	// Host identifier for the agent that collected this metric

 ... (clipped 6 lines)

Solution Walkthrough:

Before:

// pkg/models/metrics.go
type CPUMetric struct {
    CoreID       int32
    Label        string
    UsagePercent float64
    FrequencyHz  float64
    // Redundant fields for each core
    Timestamp    time.Time
    HostID       string
    HostIP       string
    AgentID      string
}

type CPUClusterMetric struct {
    Name         string
    FrequencyHz  float64
    // Redundant fields for each cluster
    Timestamp    time.Time
    HostID       string
}

After:

// pkg/models/metrics.go
type CPUMetric struct {
    CoreID       int32
    Label        string
    UsagePercent float64
    FrequencyHz  float64
}

type CPUClusterMetric struct {
    Name         string
    FrequencyHz  float64
}

// In a higher-level response struct
type SysmonStatus struct {
    Timestamp string
    HostID    string
    HostIP    string
    CPUs      []CPUMetric
    Clusters  []CPUClusterMetric
    // ... other metrics
}

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies the introduction of redundant fields in core data models, which is a significant design flaw that could lead to data bloat and inconsistencies, even if not currently populated.

Medium
General
Remove cloning on write to cache
Suggestion Impact: The commit removed the hostfreq cache entirely (including both read- and write-side cloning), replacing it with a buffered sampler. This makes the specific write-side clone removal moot by eliminating the caching mechanism, but it addresses the underlying inefficiency more broadly.

code diff:

-func hostfreqCacheSnapshot() *Snapshot {
-	hostfreqCache.mu.Lock()
-	defer hostfreqCache.mu.Unlock()
-
-	if hostfreqCache.snapshot == nil {
-		return nil
-	}
-
-	if time.Since(hostfreqCache.fetched) > hostfreqCacheTTL {
-		return nil
-	}
-
-	return snapshotClone(hostfreqCache.snapshot)
-}
-
-func hostfreqCacheStore(snapshot *Snapshot) {
-	hostfreqCache.mu.Lock()
-	defer hostfreqCache.mu.Unlock()
-
-	hostfreqCache.snapshot = snapshotClone(snapshot)
-	hostfreqCache.fetched = time.Now()
-}
-
-func snapshotClone(src *Snapshot) *Snapshot {
-	if src == nil {
-		return nil
-	}
-
-	out := &Snapshot{
-		Cores:    make([]CoreFrequency, len(src.Cores)),
-		Clusters: make([]ClusterFrequency, len(src.Clusters)),
-	}
-	copy(out.Cores, src.Cores)
-	copy(out.Clusters, src.Clusters)
-	return out
-}

Modify hostfreqCacheStore to store the snapshot pointer directly instead of a
clone, as cloning is already handled on read, thus removing the redundant
write-side clone.

pkg/cpufreq/hostfreq_darwin.go [240-246]

 func hostfreqCacheStore(snapshot *Snapshot) {
 	hostfreqCache.mu.Lock()
 	defer hostfreqCache.mu.Unlock()
 
-	hostfreqCache.snapshot = snapshotClone(snapshot)
+	hostfreqCache.snapshot = snapshot
 	hostfreqCache.fetched = time.Now()
 }

[Suggestion processed]

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a redundant clone on write to the cache and proposes removing it, which is a standard and more efficient caching pattern.

Medium
Avoid redundant snapshot cloning for efficiency
Suggestion Impact: The commit refactored caching to a sampler and, after creating a snapshot, records it and returns snapshotClone(snapshot) directly, avoiding a redundant cache read/clone. This aligns with the suggestion's intent.

code diff:

+	snapshot, err := collectHostfreqSnapshot(ctx, window)
+	if err != nil {
+		return nil, err
+	}
+
+	hostfreqSampler.record(snapshot, time.Now())
+
+	return snapshotClone(snapshot), nil
+}

In collectViaHostfreq, avoid a redundant snapshot clone by returning a clone of
the local snapshot variable directly, instead of reading it back from the cache.

pkg/cpufreq/hostfreq_darwin.go [139-146]

 snapshot, err := payloadToSnapshot(payload)
 if err != nil {
 	return nil, err
 }
 
 hostfreqCacheStore(snapshot)
 
-return hostfreqCacheSnapshot(), nil
+// Return a clone of the newly created snapshot to ensure the caller
+// gets a fresh copy, consistent with a cache hit.
+return snapshotClone(snapshot), nil

[Suggestion processed]

Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies a redundant cloning operation and proposes a valid fix that improves performance by avoiding an unnecessary cache read and lock cycle.

Low
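Both suggestions above hinge on where the deep copy happens. A minimal sketch of the clone-on-read idea, with an illustrative struct rather than the actual cpufreq.Snapshot, shows why the copy matters: without it, a caller could mutate the cached data through the returned pointer.

```go
package main

import "fmt"

// snapshot stands in for the real Snapshot type; fields are simplified.
type snapshot struct {
	cores []float64 // per-core frequencies in Hz
}

// cloneSnapshot deep-copies the slice so callers cannot reach the cached
// backing array through the returned pointer. Cloning once, on read, is
// sufficient provided the writer hands over ownership of the stored
// pointer instead of cloning again on write.
func cloneSnapshot(src *snapshot) *snapshot {
	if src == nil {
		return nil
	}
	out := &snapshot{cores: make([]float64, len(src.cores))}
	copy(out.cores, src.cores)
	return out
}

func main() {
	cached := &snapshot{cores: []float64{3.2e9, 2.0e9}}
	got := cloneSnapshot(cached)
	got.cores[0] = 0                  // caller mutates its own copy...
	fmt.Println(cached.cores[0] > 0)  // ...the cached data is unaffected: true
}
```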