fix: Caching flows with no template #2843

Closed
mikemiles-dev wants to merge 3 commits from refs/pull/2843/head into staging
mikemiles-dev commented 2026-02-03 22:32:44 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #2689
Original author: @mikemiles-dev
Original URL: https://github.com/carverauto/serviceradar/pull/2689
Original created: 2026-02-03T22:32:44Z
Original updated: 2026-02-05T06:54:50Z
Original head: mikemiles-dev/serviceradar:fix/ISSUE_2678
Original base: staging

User description

IMPORTANT: Please sign the Developer Certificate of Origin

Thank you for your contribution to ServiceRadar. Please note that when contributing, you must include
a DCO sign-off statement indicating DCO acceptance in at least one commit message. Here
is an example DCO Signed-off-by line in a commit message:

Signed-off-by: J. Doe <j.doe@domain.com>

Describe your changes

Code checklist before requesting a review

  • I have signed the DCO?
  • The build completes without errors?
  • All tests are passing when running make test?

PR Type

Bug fix, Enhancement


Description

  • Add pending packet buffer to handle flows with missing templates

  • Implement template learning detection and packet retry mechanism

  • Add configurable TTL and max packet limits for pending buffer

  • Extract packet processing logic into reusable method


Diagram Walkthrough

flowchart LR
  A["NetFlow Packet Received"] --> B["Parse with Current Templates"]
  B --> C{Parse Successful?}
  C -->|Yes| D["Process & Send Flows"]
  C -->|No| E["Buffer Packet"]
  F["New Template Learned"] --> G["Retry Pending Packets"]
  G --> B
  E --> H["Sweep Expired Packets"]
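As a rough illustration of the flow above, here is a minimal Rust sketch (hypothetical types and names; the real listener parses NetFlow datagrams and tracks templates per source rather than matching bare template IDs):

```rust
use std::collections::VecDeque;

// Illustrative stand-in for the collector's buffer-and-retry decision.
struct Collector {
    known_templates: Vec<u16>,          // template IDs learned so far
    pending: VecDeque<(u16, Vec<u8>)>,  // raw datagrams awaiting a template
    processed: usize,                   // flows successfully emitted
}

impl Collector {
    fn receive(&mut self, template_id: u16, data: Vec<u8>) {
        if self.known_templates.contains(&template_id) {
            self.processed += 1;                     // "Process & Send Flows"
        } else {
            self.pending.push_back((template_id, data)); // "Buffer Packet"
        }
    }

    fn learn_template(&mut self, template_id: u16) {
        self.known_templates.push(template_id);      // "New Template Learned"
        // "Retry Pending Packets": re-run buffered datagrams through receive()
        let retries: Vec<_> = self.pending.drain(..).collect();
        for (tid, data) in retries {
            self.receive(tid, data);
        }
    }
}
```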

File Walkthrough

Relevant files
Configuration changes
config.rs
Add pending packet buffer configuration options                   

rust/netflow-collector/src/config.rs

  • Add pending_packet_ttl_secs configuration with default 60 seconds
  • Add max_pending_packets configuration with default 100 packets
  • Add corresponding default value functions for serde deserialization
+14/-0   
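The fallback behavior of these defaults can be sketched without serde. In the actual config.rs the defaults are wired through `#[serde(default = "...")]` attributes on the deserialized struct; the helper below only illustrates the same logic (the `from_partial` constructor is hypothetical):

```rust
// Default value functions, as named in the PR description.
fn default_pending_packet_ttl_secs() -> u64 { 60 }
fn default_max_pending_packets() -> usize { 100 }

struct CollectorConfig {
    pending_packet_ttl_secs: u64,
    max_pending_packets: usize,
}

impl CollectorConfig {
    // Mirrors what serde does when a field is absent from the config file.
    fn from_partial(ttl: Option<u64>, max: Option<usize>) -> Self {
        Self {
            pending_packet_ttl_secs: ttl.unwrap_or_else(default_pending_packet_ttl_secs),
            max_pending_packets: max.unwrap_or_else(default_max_pending_packets),
        }
    }
}
```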
Enhancement
listener.rs
Implement pending packet buffer with template-triggered retry

rust/netflow-collector/src/listener.rs

  • Add PendingPacketBuffer field to store packets awaiting templates
  • Add templates_learned atomic flag to track template learning events
  • Extract packet processing logic into process_parsed_packet() method
  • Implement retry_pending_packets() to re-parse buffered packets when
    templates arrive
  • Add sweep_pending_buffer() and get_pending_stats() public methods
  • Modify template event callback to set flag when templates are learned
  • Buffer packets on parse errors and retry after template learning
+183/-58
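The `templates_learned` signal described above can be sketched with an `AtomicBool`: the template event callback sets it, and the datagram path swaps it back to `false` so the retry pass fires at most once per signal (names are illustrative; see listener.rs for the actual wiring):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct Listener {
    templates_learned: AtomicBool,
}

impl Listener {
    // Called from the parser's template event callback.
    fn on_template_event(&self) {
        self.templates_learned.store(true, Ordering::Relaxed);
    }

    // swap(false) consumes the signal exactly once, so a single template
    // event triggers a single retry pass over the pending buffer.
    fn should_retry_pending(&self) -> bool {
        self.templates_learned.swap(false, Ordering::Relaxed)
    }
}
```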
pending_buffer.rs
New pending packet buffer implementation with TTL management

rust/netflow-collector/src/pending_buffer.rs

  • Create new module with PendingPacketBuffer struct managing per-source
    packet queues
  • Implement packet addition with FIFO eviction at capacity limits
  • Implement expiration checking and sweep functionality based on TTL
  • Provide statistics reporting for pending packets and sources
  • Add comprehensive unit tests covering add, eviction, expiration, and
    stats
+162/-0 
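A minimal self-contained sketch of the buffer's cap and TTL mechanics follows. The actual module also records the original receive timestamp for each packet and exposes statistics; this version keeps only the FIFO eviction and sweep behavior:

```rust
use std::collections::{HashMap, VecDeque};
use std::net::SocketAddr;
use std::time::{Duration, Instant};

struct PendingPacket {
    data: Vec<u8>,
    received_at: Instant,
}

struct PendingPacketBuffer {
    buffer: HashMap<SocketAddr, VecDeque<PendingPacket>>,
    ttl: Duration,
    max_packets_per_source: usize,
}

impl PendingPacketBuffer {
    fn new(ttl: Duration, max_per_source: usize) -> Self {
        Self {
            buffer: HashMap::new(),
            ttl,
            max_packets_per_source: max_per_source,
        }
    }

    fn add(&mut self, source: SocketAddr, data: Vec<u8>) {
        let queue = self.buffer.entry(source).or_default();
        // FIFO eviction: drop the oldest packet once the per-source cap is hit.
        while queue.len() >= self.max_packets_per_source {
            queue.pop_front();
        }
        queue.push_back(PendingPacket { data, received_at: Instant::now() });
    }

    // Drop expired packets and empty queues; returns how many were removed.
    fn sweep(&mut self) -> usize {
        let ttl = self.ttl;
        let mut removed = 0;
        for queue in self.buffer.values_mut() {
            let before = queue.len();
            queue.retain(|p| p.received_at.elapsed() < ttl);
            removed += before - queue.len();
        }
        self.buffer.retain(|_, q| !q.is_empty());
        removed
    }
}
```

Note that, as the review below points out, the `HashMap` keyed by source address has no global bound, so a per-source cap alone does not prevent memory growth across many sources.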
metrics.rs
Add pending buffer metrics reporting                                         

rust/netflow-collector/src/metrics.rs

  • Call sweep_pending_buffer() during metrics reporting cycle
  • Log pending packet buffer statistics when packets are buffered
+10/-0   
Miscellaneous
main.rs
Register pending buffer module                                                     

rust/netflow-collector/src/main.rs

  • Add module declaration for new pending_buffer module
+1/-0     
Tests
publisher.rs
Update test configuration for pending buffer                         

rust/netflow-collector/src/publisher.rs

  • Add pending_packet_ttl_secs and max_pending_packets fields to test
    config
  • Remove obsolete nats_creds_file field from test config
+2/-1     

CLAassistant commented 2026-02-03 22:32:51 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @CLAassistant
Original URL: https://github.com/carverauto/serviceradar/pull/2689#issuecomment-3844086650
Original created: 2026-02-03T22:32:51Z

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


mikemiles-dev seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

qodo-code-review[bot] commented 2026-02-03 22:33:24 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2689#issuecomment-3844090063
Original created: 2026-02-03T22:33:24Z

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Memory exhaustion

Description: PendingPacketBuffer can grow without a global bound on number of sources (the
HashMap<SocketAddr, ...> is unbounded), so an attacker can send malformed/templated
NetFlow from many (possibly spoofed) source addresses to trigger buffering and cause
memory exhaustion despite the per-source packet cap.
pending_buffer.rs [11-72]

Referred Code
pub struct PendingPacketBuffer {
    buffer: HashMap<SocketAddr, VecDeque<PendingPacket>>,
    ttl: Duration,
    max_packets_per_source: usize,
}

impl PendingPacketBuffer {
    pub fn new(ttl: Duration, max_per_source: usize) -> Self {
        Self {
            buffer: HashMap::new(),
            ttl,
            max_packets_per_source: max_per_source,
        }
    }

    pub fn add(&mut self, source: SocketAddr, data: Vec<u8>, receive_time_ns: u64) {
        let queue = self.buffer.entry(source).or_default();

        // Evict oldest if at capacity
        while queue.len() >= self.max_packets_per_source {
            queue.pop_front();


 ... (clipped 41 lines)
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Potential sensitive logs: Debug/warn logs include full parsed packet and per-flow fields (e.g., src_addr, dst_addr,
ports) which may be considered sensitive and should be validated/redacted per the project
logging policy.

Referred Code
debug!("Parsed NetFlow packet {:?}", packet);

let flow_messages: Vec<flowpb::FlowMessage> =
    match Converter::new(packet, peer_addr, receive_time_ns).try_into() {
        Ok(messages) => messages,
        Err(e) => {
            warn!("Failed to convert NetFlow packet to protobuf: {:?}", e);
            return Ok((0, 0));
        }
    };

// Filter out degenerate flow records (0 bytes, 0 packets)
let (valid, invalid): (Vec<_>, Vec<_>) =
    flow_messages.into_iter().partition(is_valid_flow);

if !invalid.is_empty() {
    warn!(
        "Dropped {} degenerate flow record(s) from {} \
         (0 bytes, 0 packets - likely options template or metadata)",
        invalid.len(),
        peer_addr


 ... (clipped 13 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Compliance status legend
🟢 - Fully Compliant
🟡 - Partial Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
qodo-code-review[bot] commented 2026-02-03 22:34:46 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2689#issuecomment-3844098743
Original created: 2026-02-03T22:34:46Z

PR Code Suggestions

Latest suggestions up to d3a5055

Category | Suggestion | Impact
Incremental [*]
Prevent duplicate flow replays

Modify the logic to buffer a raw datagram for retry only if none of its
contained packets could be processed, preventing potential data duplication.

rust/netflow-collector/src/listener.rs [232-256]

 let mut had_errors = false;
+let mut processed_any = false;
+
 for packet_result in packets {
     match packet_result {
         Ok(packet) => {
             self.process_parsed_packet(packet, peer_addr, receive_time_ns)?;
+            processed_any = true;
         }
         Err(e) => {
             warn!("Failed to parse NetFlow packet from {}: {:?}", peer_addr, e);
             had_errors = true;
         }
     }
 }
 
-// If there were parse errors, buffer the raw packet for later retry
-if had_errors {
-    let mut pending = self.pending_buffer
+// Buffer only if we couldn't process anything from this datagram.
+// This prevents duplicate flow emission on later retries.
+if had_errors && !processed_any {
+    let mut pending = self
+        .pending_buffer
         .lock()
         .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?;
     pending.add(peer_addr, data.to_vec(), receive_time_ns);
     info!(
         "Buffered pending packet from {} ({} bytes)",
         peer_addr,
         data.len()
     );
 }
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a bug where successfully processed flows could be duplicated if other flows in the same datagram failed to parse, and provides a correct fix.

High
Avoid repeated mutex locking

Refactor the retry_pending_packets function to reduce repeated locking of the
pending_buffer mutex within the loop, improving performance and reducing
contention.

rust/netflow-collector/src/listener.rs [272-336]

-let pending_packets = self
-    .pending_buffer
-    .lock()
-    .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?
-    .take_all(&peer_addr);
+let (pending_packets, ttl) = {
+    let mut pending = self
+        .pending_buffer
+        .lock()
+        .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?;
+    let packets = pending.take_all(&peer_addr);
+    let ttl = pending.ttl;
+    (packets, ttl)
+};
+
 let count = pending_packets.len();
 if count == 0 {
     return Ok(());
 }
 
 info!("Retrying {} pending packet(s) for {}", count, peer_addr);
 
 let mut recovered = 0usize;
 let mut still_pending = 0usize;
+let mut to_readd = Vec::new();
 
 for pkt in pending_packets {
-    // Check if this packet has expired
-    let expired = self
-        .pending_buffer
-        .lock()
-        .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?
-        .is_expired(&pkt);
-    if expired {
+    if pkt.received_at.elapsed() >= ttl {
         debug!(
             "Dropping expired pending packet from {} ({} bytes)",
             peer_addr,
             pkt.data.len()
         );
         continue;
     }
+
     ...
     if had_errors {
-        self.pending_buffer
-            .lock()
-            .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?
-            .re_add(peer_addr, pkt);
+        to_readd.push(pkt);
         still_pending += 1;
     }
 }
 
+if !to_readd.is_empty() {
+    let mut pending = self
+        .pending_buffer
+        .lock()
+        .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?;
+    for pkt in to_readd {
+        pending.re_add(peer_addr, pkt);
+    }
+}
+

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a performance issue with repeated mutex locking inside a loop and proposes a more efficient pattern that reduces lock contention.

Medium
Remove racy pre-check locking

Remove the redundant has_pending check before calling retry_pending_packets to
avoid an unnecessary lock acquisition and a potential race condition.

rust/netflow-collector/src/listener.rs [259-266]

-let has_pending = self
-    .pending_buffer
-    .lock()
-    .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?
-    .has_pending(&peer_addr);
-if templates_were_learned && has_pending {
+if templates_were_learned {
     self.retry_pending_packets(peer_addr)?;
 }
Suggestion importance[1-10]: 6


Why: The suggestion correctly points out a redundant lock and a potential race condition, proposing a simplification that improves both performance and correctness.

Low
Possible issue
Retry based on per-source success

Replace the global templates_learned atomic flag with a per-source success check
(!had_errors && has_pending) to trigger retries, avoiding race conditions and
incorrect behavior with concurrent sources.

rust/netflow-collector/src/listener.rs [229-266]

-// Check if templates were learned during this parse (swap to false)
-let templates_were_learned = self.templates_learned.swap(false, Ordering::Relaxed);
-
 let mut had_errors = false;
 for packet_result in packets {
     match packet_result {
         Ok(packet) => {
             self.process_parsed_packet(packet, peer_addr, receive_time_ns)?;
         }
         Err(e) => {
             warn!("Failed to parse NetFlow packet from {}: {:?}", peer_addr, e);
             had_errors = true;
         }
     }
 }
 
 // If there were parse errors, buffer the raw packet for later retry
 if had_errors {
     let mut pending = self.pending_buffer
         .lock()
         .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?;
     pending.add(peer_addr, data.to_vec(), receive_time_ns);
     info!(
         "Buffered pending packet from {} ({} bytes)",
         peer_addr,
         data.len()
     );
 }
 
-// If new templates were learned, retry any pending packets for this source
+// If parsing succeeded, retry any pending packets for this source
 let has_pending = self
     .pending_buffer
     .lock()
     .map_err(|e| anyhow::anyhow!("pending buffer lock poisoned: {e}"))?
     .has_pending(&peer_addr);
-if templates_were_learned && has_pending {
+if !had_errors && has_pending {
     self.retry_pending_packets(peer_addr)?;
 }
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a race condition and flawed logic in using a global atomic flag for a per-source retry mechanism, proposing a more robust, localized trigger that improves correctness.

Medium
Prevent panics from poisoned locks

Replace .unwrap() with proper error handling for the parser mutex lock to
prevent the listener from panicking if the lock is poisoned.

rust/netflow-collector/src/listener.rs [224-227]

 let packets: Vec<_> = {
-    let mut parser = self.parser.lock().unwrap();
+    let mut parser = self
+        .parser
+        .lock()
+        .map_err(|e| anyhow::anyhow!("parser lock poisoned: {e}"))?;
     parser.iter_packets_from_source(peer_addr, data).collect()
 };

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7

__

Why: The suggestion correctly points out that using .unwrap() on a poisoned mutex will cause a panic, and proposes robust error handling consistent with other parts of the PR, improving application stability.

Medium

Previous suggestions

Suggestions up to commit 0c38d9e
Category | Suggestion | Impact
General
Log retry errors instead of panicking

Add error handling for the result of self.retry_pending_packets(peer_addr) to
log failures instead of allowing a potential panic.

rust/netflow-collector/src/listener.rs [250-252]

 if templates_were_learned && self.pending_buffer.lock().unwrap().has_pending(&peer_addr) {
-    self.retry_pending_packets(peer_addr)?;
+    if let Err(e) = self.retry_pending_packets(peer_addr) {
+        error!("Failed retrying pending packets for {}: {:?}", peer_addr, e);
+    }
 }
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that an unhandled Result from retry_pending_packets could cause a panic and proposes adding proper error handling, which is critical for service stability.

Medium
Cap pending buffer by source

To prevent out-of-memory errors, add a cap on the number of distinct sources in
the pending buffer and evict the oldest source when the limit is reached.

rust/netflow-collector/src/pending_buffer.rs [11-39]

+use indexmap::IndexMap;
+
 pub struct PendingPacketBuffer {
-    buffer: HashMap<SocketAddr, VecDeque<PendingPacket>>,
+    buffer: IndexMap<SocketAddr, VecDeque<PendingPacket>>,
     ttl: Duration,
     max_packets_per_source: usize,
+    max_sources: usize,
 }
 
 pub fn add(&mut self, source: SocketAddr, data: Vec<u8>, receive_time_ns: u64) {
+    // Evict oldest source entirely if we exceed source cap
+    if self.buffer.len() >= self.max_sources && !self.buffer.contains_key(&source) {
+        let oldest = *self.buffer.keys().next().unwrap();
+        self.buffer.shift_remove(&oldest);
+    }
     let queue = self.buffer.entry(source).or_default();
-    // Evict oldest if at capacity
     while queue.len() >= self.max_packets_per_source {
         queue.pop_front();
     }
     queue.push_back(PendingPacket {
         data,
         receive_time_ns,
         received_at: Instant::now(),
     });
 }
Suggestion importance[1-10]: 7


Why: The suggestion addresses a potential memory exhaustion issue by proposing a limit on the number of sources, which is a valid and important reliability improvement for the service.

Medium
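The per-source eviction policy that both the original `add()` and this suggestion rely on can be sketched in isolation (illustrative helper, not from the PR):

```rust
use std::collections::VecDeque;

// FIFO eviction under a fixed cap: when the queue is full,
// the oldest buffered packet is dropped first.
fn push_capped<T>(queue: &mut VecDeque<T>, item: T, cap: usize) {
    while queue.len() >= cap {
        queue.pop_front();
    }
    queue.push_back(item);
}

fn main() {
    let mut q: VecDeque<u32> = VecDeque::new();
    for pkt in 0..5 {
        push_capped(&mut q, pkt, 3);
    }
    // Only the three newest "packets" survive.
    assert_eq!(q, VecDeque::from(vec![2u32, 3, 4]));
    println!("{q:?}");
}
```

The suggestion extends the same idea one level up: when the number of distinct sources hits a cap, the oldest source's whole queue is evicted before a new source is admitted.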
Expose TTL to avoid unnecessary locking

Add a public ttl() getter to PendingPacketBuffer to avoid locking the buffer
just to read the ttl value, allowing for more efficient expiration checks.

rust/netflow-collector/src/pending_buffer.rs [54-56]

-pub fn is_expired(&self, packet: &PendingPacket) -> bool {
-    packet.received_at.elapsed() >= self.ttl
+pub fn ttl(&self) -> Duration {
+    self.ttl
 }
Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies that locking to read an immutable ttl value is inefficient and proposes a getter to allow checking expiration outside the lock, which is a significant performance improvement.

Low
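The pattern the getter enables, copying the immutable `ttl` out once and checking expiry without holding the buffer lock, can be sketched as (illustrative, not from the PR):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Expiry check needs only values copied out of the buffer:
// the packet's receive Instant and the (immutable) ttl.
fn is_expired(received_at: Instant, ttl: Duration) -> bool {
    received_at.elapsed() >= ttl
}

fn main() {
    let ttl = Duration::from_millis(50);
    let received_at = Instant::now();
    assert!(!is_expired(received_at, ttl));
    thread::sleep(Duration::from_millis(60));
    assert!(is_expired(received_at, ttl));
    println!("expiry checked without the buffer lock");
}
```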
mfreeman451 commented 2026-02-03 23:12:08 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR review comment.

Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/2689#discussion_r2761440568
Original created: 2026-02-03T23:12:08Z
Original path: rust/netflow-collector/src/publisher.rs
Original line: 238

we dont need the nats_creds_file here anymore?

mfreeman451 commented 2026-02-05 06:54:50 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/2689#issuecomment-3851408137
Original created: 2026-02-05T06:54:50Z

closing, stale


Pull request closed
