k8s updates, nats updates for otel to create stream if missing, poller #2319

Merged
mfreeman451 merged 4 commits from refs/pull/2319/head into main 2025-10-14 20:29:34 +00:00
mfreeman451 commented 2025-10-14 18:42:54 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #1759
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/1759
Original created: 2025-10-14T18:42:54Z
Original updated: 2025-10-14T20:29:37Z
Original head: carverauto/serviceradar:update/stability_core_updates
Original base: main
Original merged: 2025-10-14T20:29:34Z by @mfreeman451

User description

stability fixes


PR Type

Enhancement, Bug fix


Description

  • Enhanced NATS stream configuration with retention policies (max_bytes, max_age)

  • Added thread-safe core client access with nil checks in poller

  • Increased Kubernetes resource limits for core, KV, and NATS services

  • Improved error handling for unavailable core client connections
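The thread-safe client access and nil-check behavior described above might look roughly like this. This is a minimal Go sketch, not the actual poller.go code; the `Poller` fields and the `getCoreClient`/`setCoreClient`/`sendReport` names are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// PollerServiceClient stands in for the real gRPC client interface.
type PollerServiceClient interface{}

var errCoreClientUnavailable = errors.New("core client unavailable")

// Poller guards its core client with a mutex so concurrent reporting
// goroutines never race with reconnect logic swapping the client out.
type Poller struct {
	mu         sync.RWMutex
	coreClient PollerServiceClient
}

// getCoreClient returns the current client, or nil if disconnected.
func (p *Poller) getCoreClient() PollerServiceClient {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.coreClient
}

// setCoreClient installs a new client after (re)connecting.
func (p *Poller) setCoreClient(c PollerServiceClient) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.coreClient = c
}

// sendReport fails fast with a sentinel error instead of
// dereferencing a nil client.
func (p *Poller) sendReport() error {
	if p.getCoreClient() == nil {
		return errCoreClientUnavailable
	}
	return nil
}

func main() {
	p := &Poller{}
	fmt.Println(p.sendReport()) // core client unavailable
	p.setCoreClient(struct{}{})
	fmt.Println(p.sendReport()) // <nil>
}
```

Callers that previously assumed a live client can now branch on the sentinel error and retry or skip the report cycle.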


Diagram Walkthrough

flowchart LR
  poller["Poller Service"] -- "thread-safe access" --> coreClient["Core Client"]
  otel["OTEL Service"] -- "stream config" --> nats["NATS JetStream"]
  nats -- "retention policies" --> stream["Events Stream"]
  k8s["K8s Resources"] -- "increased limits" --> services["Core/KV/NATS"]

File Walkthrough

Relevant files

Bug fix (2 files)
  poller.go: Add thread-safe core client access and error handling (+25/-4)
  serviceradar-kong.yaml: Add file permissions for Kong configuration (+2/-0)

Enhancement (2 files)
  config.rs: Add NATS stream retention configuration fields (+23/-0)
  nats_output.rs: Implement stream retention policy updates and validation (+40/-12)

Configuration changes (7 files)
  entrypoint-db-event-writer.sh: Make wait-for service attempts configurable via environment (+4/-2)
  docker-compose.yml: Add memory limits and CPU constraints to services (+3/-0)
  otel.docker.toml: Configure NATS stream retention parameters (+7/-1)
  configmap.yaml: Remove NATS health check and add retention settings (+7/-2)
  serviceradar-core.yaml: Increase CPU and memory resource limits (+4/-4)
  serviceradar-kv.yaml: Increase CPU resource limits for KV service (+2/-2)
  serviceradar-nats.yaml: Increase CPU resource limits for NATS service (+2/-2)

qodo-code-review[bot] commented 2025-10-14 18:43:41 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1759#issuecomment-3403152723
Original created: 2025-10-14T18:43:41Z

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Resource exhaustion

Description: The stream retention limits use very large defaults (max_bytes ≈ 2 GiB, max_age ≈ 30
minutes) which could still allow high storage growth if subjects proliferate; ensure
quotas align with storage capacity and that unauthorized config escalation is not possible
through untrusted config sources.
nats_output.rs [68-136]

Referred Code
let desired_config = jetstream::stream::Config {
    name: config.stream.clone(),
    subjects: subjects.clone(),
    storage: StorageType::File,
    max_bytes: config.max_bytes,
    max_age: config.max_age,
    ..Default::default()
};

// Try to get or create the stream
match jetstream.get_or_create_stream(desired_config.clone()).await {
    Ok(mut stream) => {
        let stream_info = stream.info().await?;
        let existing_subjects = &stream_info.config.subjects;
        let mut needs_update = false;
        let mut updated_config = stream_info.config.clone();

        let missing_subjects: Vec<_> = subjects
            .iter()
            .filter(|s| !existing_subjects.contains(s))
            .collect();


 ... (clipped 48 lines)
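The referred code merges any subjects the stream does not yet carry into the existing config before updating it. The same reconciliation decision can be sketched as a pure function in Go (the `StreamConfig` stand-in and `reconcile` name are illustrative assumptions, not the actual Rust implementation):

```go
package main

import "fmt"

// StreamConfig is a minimal stand-in for a JetStream stream config.
type StreamConfig struct {
	Subjects []string
	MaxBytes int64
	MaxAgeNS int64
}

// reconcile returns the config to push to JetStream and whether an
// update call is needed at all.
func reconcile(existing, desired StreamConfig) (StreamConfig, bool) {
	updated := existing
	changed := false

	// Merge subjects the stream does not yet carry.
	have := make(map[string]bool, len(existing.Subjects))
	for _, s := range existing.Subjects {
		have[s] = true
	}
	for _, s := range desired.Subjects {
		if !have[s] {
			updated.Subjects = append(updated.Subjects, s)
			changed = true
		}
	}

	// Align retention limits with the configured policy.
	if existing.MaxBytes != desired.MaxBytes {
		updated.MaxBytes = desired.MaxBytes
		changed = true
	}
	if existing.MaxAgeNS != desired.MaxAgeNS {
		updated.MaxAgeNS = desired.MaxAgeNS
		changed = true
	}
	return updated, changed
}

func main() {
	existing := StreamConfig{Subjects: []string{"events.a"}}
	desired := StreamConfig{
		Subjects: []string{"events.a", "events.b"},
		MaxBytes: 2 << 30, // ~2 GiB, mirroring the default flagged above
	}
	cfg, changed := reconcile(existing, desired)
	fmt.Println(changed, cfg.Subjects, cfg.MaxBytes)
}
```

Keeping the decision pure makes the "needs update" path easy to unit-test without a live JetStream connection.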
DoS via infinite wait

Description: Allowing unlimited or very high connection wait attempts via environment variables
(default 0 meaning potentially infinite) can cause denial of service or stuck startup if
services are unreachable; consider sane upper bounds or timeouts.
entrypoint-db-event-writer.sh [52-76]

Referred Code
    NATS_ATTEMPTS="${WAIT_FOR_NATS_ATTEMPTS:-0}"
    if wait-for-port \
        --host "${NATS_HOST_VALUE}" \
        --port "${NATS_PORT_VALUE}" \
        --attempts "${NATS_ATTEMPTS}" \
        --interval 2s \
        --quiet; then
        echo "NATS service is ready!"
    else
        echo "ERROR: Timed out waiting for NATS at ${NATS_HOST_VALUE}:${NATS_PORT_VALUE}" >&2
        exit 1
    fi
fi

if [ -n "${WAIT_FOR_PROTON:-}" ]; then
    PROTON_HOST_VALUE=$(resolve_service_host "serviceradar-proton" PROTON_HOST "proton")
    PROTON_PORT_VALUE=$(resolve_service_port PROTON_PORT "9440")
    echo "Waiting for Proton database at ${PROTON_HOST_VALUE}:${PROTON_PORT_VALUE}..."

    PROTON_ATTEMPTS="${WAIT_FOR_PROTON_ATTEMPTS:-0}"
    if wait-for-port \


 ... (clipped 4 lines)
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided

Follow the guide to enable custom compliance check.

Compliance status legend
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
qodo-code-review[bot] commented 2025-10-14 18:44:52 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1759#issuecomment-3403155904
Original created: 2025-10-14T18:44:52Z

PR Code Suggestions

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Restore a sensible default for retries
Suggestion Impact: The commit changed NATS_ATTEMPTS to use a non-zero default via DEFAULT_WAIT_ATTEMPTS (60 by default), addressing the issue of 0 retries and restoring a resilient default, though not exactly to 30.

code diff:

-    NATS_ATTEMPTS="${WAIT_FOR_NATS_ATTEMPTS:-0}"
+    NATS_ATTEMPTS="${WAIT_FOR_NATS_ATTEMPTS:-$DEFAULT_WAIT_ATTEMPTS}"
     if wait-for-port \
         --host "${NATS_HOST_VALUE}" \
         --port "${NATS_PORT_VALUE}" \

In entrypoint-db-event-writer.sh, restore the default number of NATS connection
attempts to 30 instead of 0 to ensure service startup is resilient.

docker/compose/entrypoint-db-event-writer.sh [52]

-NATS_ATTEMPTS="${WAIT_FOR_NATS_ATTEMPTS:-0}"
+NATS_ATTEMPTS="${WAIT_FOR_NATS_ATTEMPTS:-30}"

[Suggestion processed]

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that changing the default number of wait attempts from 30 to 0 makes the service startup brittle and likely to fail in environments with non-deterministic startup order.

Medium
General
Simplify function by using struct's client

Refactor reportToCoreStreaming to be a method on *Poller and use the struct's
p.coreClient directly, removing the need to pass the client as an argument and
eliminating a redundant nil check.

pkg/poller/poller.go [618-626]

-func (p *Poller) reportToCoreStreaming(ctx context.Context, coreClient proto.PollerServiceClient, statuses []*proto.ServiceStatus) error {
+func (p *Poller) reportToCoreStreaming(ctx context.Context, statuses []*proto.ServiceStatus) error {
+	coreClient := p.getCoreClient()
+	// The getCoreClient check in sendReport makes this redundant, but it's a safe guard.
 	if coreClient == nil {
 		return errCoreClientUnavailable
 	}
 
 	stream, err := coreClient.StreamStatus(ctx)
 	if err != nil {
 		return fmt.Errorf("failed to create stream to core: %w", err)
 	}
 ...

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 5


Why: The suggestion correctly points out redundant code and an opportunity to simplify the function signature, which improves code clarity and maintainability.

Low