2248 feat network sweeper UI #2652

Merged
mfreeman451 merged 28 commits from refs/pull/2652/head into staging 2026-01-12 05:16:38 +00:00
mfreeman451 commented 2026-01-12 03:05:50 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #2263
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/2263
Original created: 2026-01-12T03:05:50Z
Original updated: 2026-01-12T05:17:55Z
Original head: carverauto/serviceradar:2248-feat-network-sweeper-ui
Original base: staging
Original merged: 2026-01-12T05:16:38Z by @mfreeman451

User description

IMPORTANT: Please sign the Developer Certificate of Origin

Thank you for your contribution to ServiceRadar. Please note that contributors must include
a DCO sign-off statement indicating DCO acceptance in at least one commit message. Here
is an example Signed-off-by line in a commit message:

```
Signed-off-by: J. Doe <j.doe@domain.com>
```

Describe your changes

Code checklist before requesting a review

  - [ ] I have signed the DCO
  - [ ] The build completes without errors
  - [ ] All tests pass when running `make test`

PR Type

Enhancement, Tests


Description

  • Network Sweeper UI Implementation: Added comprehensive LiveView interface for managing network sweep configuration with three main tabs (Sweep Groups, Scanner Profiles, Active Scans), dynamic criteria builder supporting 20+ device attributes, and real-time monitoring of active scans with progress tracking and performance metrics

  • Sweep Jobs Domain: Created complete sweep job management system including SweepGroup, SweepProfile, SweepGroupExecution, and SweepHostResult Ash resources with scheduling, device targeting via DSL, and execution lifecycle tracking

  • Device Targeting DSL: Implemented flexible criteria operators (eq, neq, in, contains, in_cidr, has_any, etc.) for device filtering with CIDR/IP range matching and tag support

  • Sweep Results Ingestion: Added batch processing pipeline for ingesting sweep results, updating device inventory, creating unknown devices, and broadcasting real-time progress events

  • Agent Configuration Management: Created pluggable configuration compilation system with ConfigTemplate, ConfigInstance, and ConfigVersion resources, ETS-based caching, and NATS-based invalidation events

  • Sweep Configuration Compiler: Implemented compiler transforming SweepGroup and SweepProfile into agent-consumable format with deterministic SHA256 hashing and device criteria resolution

  • Monitoring and Maintenance: Added SweepMonitorWorker for detecting missed executions, SweepDataCleanupWorker for periodic data cleanup, and observability rule seeding for sweep event monitoring

  • Device Enhancements: Added bulk device selection UI with checkboxes, bulk tag editing modal, quick filter buttons, and sweep status section on device detail pages

  • Result Buffering: Implemented in-memory buffer in agent gateway for result payloads during core outages with configurable size and flush intervals

  • Database Schema: Created tables for sweep profiles, groups, executions, host results, and agent configuration versioning with appropriate indexes and foreign key relationships

  • Comprehensive Testing: Added unit tests for target criteria DSL, integration tests for sweep config distribution and results ingestion, and LiveView tests for UI components


Diagram Walkthrough

```mermaid
flowchart LR
  A["SweepGroup<br/>Resource"] -->|"compile"| B["SweepCompiler"]
  C["SweepProfile<br/>Resource"] -->|"compile"| B
  B -->|"generate config"| D["ConfigInstance"]
  D -->|"cache"| E["ConfigCache"]
  E -->|"publish to agents"| F["Agent Gateway"]
  G["Sweep Results<br/>from Agent"] -->|"ingest"| H["SweepResultsIngestor"]
  H -->|"update"| I["Device Inventory"]
  H -->|"store"| J["SweepHostResult"]
  H -->|"broadcast"| K["PubSub Events"]
  K -->|"display"| L["Sweeper UI<br/>LiveView"]
  M["SweepMonitorWorker"] -->|"detect missed"| N["Internal Logs"]
  O["SweepDataCleanupWorker"] -->|"cleanup old data"| J
```

File Walkthrough

Relevant files
Enhancement
35 files
index.ex
Network Sweeper UI LiveView with Criteria Builder               

web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex

  • New LiveView module for managing network sweep configuration with
    three main tabs: Sweep Groups, Scanner Profiles, and Active Scans
  • Implements comprehensive UI for creating/editing sweep groups with
    dynamic criteria builder supporting 20+ device attributes and multiple
    operators
  • Real-time monitoring of active scans with progress tracking, batch
    processing, and scanner performance metrics (packets, retries, drop
    rates)
  • Handles sweep execution lifecycle events (started, progress,
    completed, failed) with PubSub subscriptions for real-time updates
+2375/-0
sweep_results_ingestor.ex
Sweep Results Ingestion and Device Inventory Update           

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex

  • New module for ingesting sweep results and updating device inventory
    with batch processing (500 records per batch)
  • Performs bulk device lookup, creates unknown devices, and updates
    availability status based on ICMP/TCP scan results
  • Stores SweepHostResult records and broadcasts real-time progress
    events after each batch
  • Handles scanner metrics aggregation and execution lifecycle management
    with proper error handling
+613/-0 
sweep_compiler.ex
Sweep Configuration Compiler for Agent Deployment               

elixir/serviceradar_core/lib/serviceradar/agent_config/compilers/sweep_compiler.ex

  • New compiler module transforming SweepGroup and SweepProfile Ash
    resources into agent-consumable configuration format
  • Compiles sweep schedules, targets (from static targets and device
    criteria), ports, modes, and settings with deterministic SHA256
    hashing
  • Resolves device criteria to actual target IPs/CIDRs and merges profile
    settings with group-level overrides
  • Implements validation and config hash computation for change detection
+294/-0 
index.ex
Bulk device selection and tag editing UI implementation   

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex

  • Added bulk device selection UI with checkboxes for individual and
    select-all functionality
  • Implemented bulk tag editing modal with form for applying tags to
    multiple devices
  • Added quick filter buttons for device availability and discovery
    sources
  • Implemented device selection state management using MapSet with
    support for selecting all matching devices
  • Added helper functions for tag parsing, device filtering, and SRQL
    query integration
+451/-3 
target_criteria.ex
Device targeting DSL with flexible criteria operators       

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/target_criteria.ex

  • Created comprehensive device targeting DSL module supporting 24+
    operators (eq, neq, in, contains, in_cidr, has_any, etc.)
  • Implemented device filtering and IP extraction based on criteria
    matching
  • Added CIDR and IP range matching with IPv4 support using bitwise
    operations
  • Included validation and Ash filter conversion for database queries
  • Provided tag matching with key-value pair support
+496/-0 
sweep_group.ex
Sweep group resource with scheduling and targeting             

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_group.ex

  • Created SweepGroup Ash resource for user-configured network sweep
    groups
  • Implemented interval and cron-based scheduling with configurable
    partitions and agents
  • Added device targeting via target_criteria DSL and static_targets
    lists
  • Included actions for managing targets, enabling/disabling groups, and
    recording executions
  • Defined policies for role-based access control (super_admin, admin,
    operator)
+380/-0 
show.ex
Device detail page sweep status section                                   

web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex

  • Added sweep status section component displaying latest sweep results
    for a device
  • Implemented sweep results loading by device IP with Ash query
    integration
  • Added sweep history table showing recent sweep executions with status
    and response times
  • Included helper functions for formatting sweep data and building
    actor/tenant context
  • Displays open ports, response times, and status indicators from sweep
    results
+209/-0 
sweep_group_execution.ex
Sweep execution tracking resource                                               

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_group_execution.ex

  • Created SweepGroupExecution Ash resource for tracking sweep execution
    lifecycle
  • Implemented status transitions (pending → running → completed/failed)
    with timing tracking
  • Added actions for starting, completing, and failing executions with
    automatic duration calculation
  • Included scanner metrics storage and success rate calculation
  • Defined read actions for querying executions by group, status, or
    recency
+288/-0 
sweep.ex
Sweep event processor with inventory updates                         

elixir/serviceradar_core/lib/serviceradar/event_writer/processors/sweep.ex

  • Extended sweep event processor to update device inventory via
    SweepResultsIngestor
  • Added device availability status updates based on sweep results
  • Implemented device discovery and creation for unknown hosts found
    during sweeps
  • Added support for execution_id, sweep_group_id, and config_hash in
    sweep messages
  • Included fallback for availability-only updates when execution context
    is unavailable
+137/-1 
sweep_monitor_worker.ex
Sweep execution monitoring worker                                               

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_monitor_worker.ex

  • Created Oban worker for monitoring missed sweep executions across all
    tenants
  • Implemented interval parsing (15m, 1h, 1d) with grace period for
    expected execution time
  • Added internal log publishing to logs.internal.sweep for missed sweeps
  • Included detailed payload with sweep group context and overdue
    duration
  • Supports configurable grace period (default 5 minutes) for clock drift
    tolerance
+213/-0 
sweep_host_result.ex
Add SweepHostResult resource for sweep execution tracking

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_host_result.ex

  • New Ash resource for tracking per-host results from network sweep
    executions
  • Stores IP, hostname, status, response time, open ports, and error
    details
  • Includes custom indexes for efficient querying by execution, IP, and
    status
  • Provides read actions for filtering by execution and availability
    status
+237/-0 
config_cache.ex
Add ETS-based configuration cache with TTL and cleanup     

elixir/serviceradar_core/lib/serviceradar/agent_config/config_cache.ex

  • New GenServer-based ETS cache for compiled agent configurations
  • Implements TTL-based expiration with periodic cleanup of expired
    entries
  • Provides hash-based change detection for efficient config updates
  • Supports cache invalidation by tenant, config type, or specific key
+199/-0 
config_instance.ex
Add ConfigInstance resource for agent configuration storage

elixir/serviceradar_core/lib/serviceradar/agent_config/config_instance.ex

  • New Ash resource for storing compiled configuration instances for
    agents
  • Tracks version, content hash, and delivery metadata
  • Supports partition-based and agent-specific configurations
  • Includes relationships to templates and version history
+222/-0 
config_server.ex
Add ConfigServer for config compilation orchestration       

elixir/serviceradar_core/lib/serviceradar/agent_config/config_server.ex

  • New GenServer orchestrating config compilation and caching
  • Provides API for getting configs with automatic caching and hash-based
    change detection
  • Supports cache invalidation when source resources change
  • Falls back to database loading when no compiler is registered
+187/-0 
sweep_profile.ex
Add SweepProfile resource for reusable scanner configurations

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_profile.ex

  • New Ash resource for admin-managed scanner profiles
  • Defines reusable scan configurations (ports, modes, timeouts,
    concurrency)
  • Supports ICMP and TCP-specific settings
  • Can be restricted to admin-only usage
+215/-0 
sweep_data_cleanup_worker.ex
Add worker for cleanup of old sweep execution data             

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_data_cleanup_worker.ex

  • New Oban worker for periodic cleanup of old sweep data
  • Deletes SweepHostResult records older than 7 days (configurable)
  • Deletes completed/failed SweepGroupExecution records older than 30
    days (configurable)
  • Processes data in batches to avoid memory issues
+174/-0 
sweep_pubsub.ex
Add PubSub broadcaster for sweep execution updates             

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_pubsub.ex

  • New module for broadcasting sweep execution updates via Phoenix.PubSub
  • Provides topic builders for tenant-level and execution-specific
    subscriptions
  • Broadcasts events for execution start, progress, completion, and
    failure
  • Includes safe broadcast handling when PubSub is unavailable
+166/-0 
rule_seeder.ex
Add seeder for default observability rules                             

elixir/serviceradar_core/lib/serviceradar/observability/rule_seeder.ex

  • New GenServer that seeds default observability rules for each tenant
  • Creates default LogPromotionRule for missed sweep detection
  • Creates default StatefulAlertRule for repeated missed sweeps
  • Ensures rules exist without duplicating on subsequent runs
+162/-0 
status_processor.ex
Add result buffering and telemetry metrics to status processor

elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_processor.ex

  • Add forward/2 function with configurable buffering and metrics
    emission
  • Implement buffering of results when core is unavailable
  • Add telemetry metrics for result forwarding with duration and status
    tracking
  • Remove retry queue logic in favor of explicit buffering
+63/-11 
config_template.ex
Add ConfigTemplate resource for reusable config schemas   

elixir/serviceradar_core/lib/serviceradar/agent_config/config_template.ex

  • New Ash resource for reusable configuration templates
  • Supports tenant-specific and admin-only templates
  • Stores JSON schema and default values for config validation
  • Includes relationships to config instances
+163/-0 
config_version.ex
Add ConfigVersion resource for configuration audit history

elixir/serviceradar_core/lib/serviceradar/agent_config/config_version.ex

  • New Ash resource for tracking configuration version history
  • Stores compiled config, hash, and source IDs for each version
  • Captures actor information and change reasons for audit trail
  • Supports rollback and historical comparison
+158/-0 
config_publisher.ex
Add publisher for config invalidation events to NATS         

elixir/serviceradar_core/lib/serviceradar/agent_config/config_publisher.ex

  • New module for publishing config invalidation events to NATS
  • Invalidates local cache and publishes cluster-wide events
  • Supports resource-specific change tracking with source information
  • Handles NATS connection failures gracefully
+119/-0 
status_buffer.ex
Add in-memory buffer for result payloads during outages   

elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_buffer.ex

  • New GenServer for buffering result payloads during core outages
  • Implements bounded in-memory queue with configurable size and flush
    interval
  • Periodically flushes buffered results back to core
  • Drops oldest entries when buffer reaches capacity
+117/-0 
agent_config_generator.ex
Integrate sweep configuration into agent payload generation

elixir/serviceradar_core/lib/serviceradar/edge/agent_config_generator.ex

  • Add load_sweep_config/2 function to load compiled sweep configurations
  • Integrate sweep config into agent payload via ConfigServer
  • Merge sweep config with existing sync and check configurations
  • Handle missing sweep configs gracefully with empty defaults
+35/-5   
compiler.ex
Add Compiler behaviour for pluggable config compilation   

elixir/serviceradar_core/lib/serviceradar/agent_config/compiler.ex

  • New behaviour module defining the compiler interface for config types
  • Specifies callbacks for compile/4, config_type/0, and
    source_resources/0
  • Includes registry of available compilers (currently sweep)
  • Provides content_hash/1 utility for SHA256 hashing of configs
+102/-0 
template_seeder.ex
Add sweep monitoring templates and alert rules                     

elixir/serviceradar_core/lib/serviceradar/observability/template_seeder.ex

  • Add passthrough log template for internal sweep logs
  • Add promote_missed_sweeps log promotion rule for sweep events
  • Add repeated_missed_sweeps stateful alert rule for repeated missed
    executions
  • Configure alert severity and grouping for sweep monitoring
+49/-0   
create_version_history.ex
Add change for creating config version history                     

elixir/serviceradar_core/lib/serviceradar/agent_config/changes/create_version_history.ex

  • New Ash change that creates version history records on config updates
  • Captures previous config state, hash, and source IDs before update
  • Extracts actor information for audit trail
  • Handles creation failures gracefully without blocking main operation
+72/-0   
monitoring.pb.ex
Extend sweep completion status with execution tracking and scanner
stats

elixir/serviceradar_core/lib/serviceradar/proto/monitoring.pb.ex

  • Add execution_id and sweep_group_id fields to SweepCompletionStatus
  • Add new SweepScannerStats message with packet, ring, retry, and port
    metrics
  • Include scanner_stats field in SweepCompletionStatus for detailed
    statistics
  • Add rate limit and drop rate tracking for network monitoring
+22/-0   
config_invalidation_notifier.ex
Add notifier for config resource change invalidation         

elixir/serviceradar_core/lib/serviceradar/agent_config/config_invalidation_notifier.ex

  • New Ash notifier for triggering cache invalidation on resource changes
  • Monitors ConfigInstance and ConfigTemplate resource changes
  • Publishes invalidation events via ConfigPublisher for cluster-wide
    propagation
  • Logs invalidation events for debugging
+70/-0   
application.ex
Add config and rule seeding infrastructure to application

elixir/serviceradar_core/lib/serviceradar/application.ex

  • Add RuleSeeder child process for seeding default observability rules
  • Add ConfigCache GenServer for ETS-based config caching
  • Add ConfigServer GenServer for config compilation orchestration
  • Add maintenance and monitoring queues to Oban configuration
+15/-0   
sweep_jobs.ex
Add SweepJobs domain for sweep configuration management   

elixir/serviceradar_core/lib/serviceradar/sweep_jobs.ex

  • New Ash domain for network sweep job management
  • Registers SweepProfile, SweepGroup, SweepGroupExecution, and
    SweepHostResult resources
  • Provides documentation on sweep groups and device targeting DSL
  • Enables AshAdmin for domain management
+50/-0   
sync_ingestor.ex
Add tags field support to device sync ingestion                   

elixir/serviceradar_core/lib/serviceradar/inventory/sync_ingestor.ex

  • Add tags field extraction from sync updates
  • Initialize tags as empty map in device creation
  • Support tags in both update and default device structures
+3/-0     
agent_config.ex
Add AgentConfig domain for configuration management           

elixir/serviceradar_core/lib/serviceradar/agent_config.ex

  • New Ash domain for agent configuration management
  • Registers ConfigTemplate, ConfigInstance, and ConfigVersion resources
  • Provides pluggable compiler pattern for config types
  • Enables AshAdmin for domain management
+40/-0   
device.ex
Add tags attribute to Device resource                                       

elixir/serviceradar_core/lib/serviceradar/inventory/device.ex

  • Add tags attribute as a map for user-defined labels
  • Include tags in create and update actions
  • Default to empty map for new devices
+8/-0     
query_builder_components.ex
Add query builder UI components                                                   

web-ng/lib/serviceradar_web_ng_web/components/query_builder_components.ex

  • New component module for query builder UI elements
  • Implements query_builder_pill/1 component for displaying query
    criteria
  • Provides visual styling with icons and connection lines for nested
    criteria
+24/-0   
Configuration changes
5 files
zen_rule_seeder.ex
Add Passthrough Zen Rule for Sweep Logs                                   

elixir/serviceradar_core/lib/serviceradar/observability/zen_rule_seeder.ex

  • Added new passthrough Zen rule for internal sweep logs with subject
    logs.internal.sweep
  • Configured with default agent, enabled state, and order priority of
    100
+11/-0   
20260111200000_add_sweep_jobs_tables.exs
Database schema for sweep jobs domain                                       

elixir/serviceradar_core/priv/repo/tenant_migrations/20260111200000_add_sweep_jobs_tables.exs

  • Created four new tables: sweep_profiles, sweep_groups,
    sweep_group_executions, sweep_host_results
  • Added indexes for tenant partitioning, partition/agent lookups, and
    execution tracking
  • Defined foreign key relationships between groups, executions, and
    results
  • Included fields for schedule configuration, device targeting, and scan
    settings
+254/-0 
20260111194010_add_agent_config_tables.exs
Agent configuration management database schema                     

elixir/serviceradar_core/priv/repo/tenant_migrations/20260111194010_add_agent_config_tables.exs

  • Created agent configuration versioning tables: agent_config_versions,
    agent_config_templates, agent_config_instances
  • Added indexes for tenant/type/partition/agent lookups and version
    tracking
  • Implemented foreign key relationships for template and instance
    references
  • Included fields for config compilation, hashing, and delivery tracking
+185/-0 
config.exs
Configure Ash domains and Oban workers for sweep jobs       

elixir/serviceradar_core/config/config.exs

  • Add ServiceRadar.AgentConfig and ServiceRadar.SweepJobs to Ash domains
  • Add maintenance and monitoring queues to Oban configuration
  • Add SweepDataCleanupWorker cron job (daily at 3 AM)
  • Add SweepMonitorWorker cron job (every 5 minutes)
+9/-3     
20260111210000_add_sweep_cleanup_indexes.exs
Add database indexes for sweep data cleanup                           

elixir/serviceradar_core/priv/repo/tenant_migrations/20260111210000_add_sweep_cleanup_indexes.exs

  • Add index on sweep_host_results.inserted_at for cleanup queries
  • Add filtered index on sweep_group_executions.started_at for
    completed/failed executions
  • Enables efficient batch deletion of old sweep data
+38/-0   
Tests
5 files
sweep_compiler_test.exs
Sweep compiler and target criteria test suite                       

elixir/serviceradar_core/test/serviceradar/sweep_jobs/sweep_compiler_test.exs

  • Added comprehensive test suite for sweep config validation and
    consistency
  • Tests for TargetCriteria DSL operators (eq, neq, contains, in_cidr,
    has_any, etc.)
  • Validates config structure for agent consumption with required fields
    and types
  • Tests deterministic config hashing and device filtering logic
  • Includes integration tests for sweep group structure and target
    validation
+363/-0 
sweep_results_flow_e2e_test.exs
Add E2E test for sweep results ingestion and device updates

elixir/serviceradar_core/test/serviceradar/sweep_jobs/sweep_results_flow_e2e_test.exs

  • New end-to-end integration test for sweep results ingestion flow
  • Tests device creation and updates from sweep results
  • Verifies execution statistics tracking and host result storage
  • Validates discovery source management and availability status updates
+163/-0 
target_criteria_test.exs
Add unit tests for device targeting criteria DSL                 

elixir/serviceradar_core/test/serviceradar/sweep_jobs/target_criteria_test.exs

  • New unit tests for TargetCriteria module covering tag operators
  • Tests range and numeric comparison operators for device targeting
  • Validates null checks and criteria validation logic
  • Tests Ash filter conversion for database queries
+92/-0   
sweep_config_distribution_integration_test.exs
Add integration test for sweep config distribution             

elixir/serviceradar_core/test/serviceradar/edge/sweep_config_distribution_integration_test.exs

  • New integration test for sweep configuration distribution to agents
  • Tests config compilation from SweepGroup and SweepProfile resources
  • Verifies target device inclusion and profile settings in compiled
    config
  • Validates agent payload generation with sweep configuration
+103/-0 
networks_live_test.exs
Add tests for network sweep configuration UI                         

web-ng/test/serviceradar_web_ng_web/live/settings/networks_live_test.exs

  • New LiveView tests for network sweep configuration UI
  • Tests listing sweep groups and profiles in settings
  • Tests tab switching between groups and profiles
  • Tests form rendering for creating new groups and profiles
+72/-0   
Formatting
2 files
srql_components.ex
Component naming refactor for query builder                           

web-ng/lib/serviceradar_web_ng_web/components/srql_components.ex

  • Renamed srql_builder_pill component calls to query_builder_pill
    throughout the file
  • Updated 14 component references to use the new naming convention
  • Added import for QueryBuilderComponents module
+17/-33 
show.ex
Reformat agent show view task definitions                               

web-ng/lib/serviceradar_web_ng_web/live/agent_live/show.ex

  • Reformat Task.async calls for improved readability
  • Split long lines into multiple lines with proper indentation
  • No functional changes, purely formatting improvements
+18/-6   
Additional files
63 files
MODULE.bazel +1/-10   
server.rs +2/-0     
grpc_server.rs +2/-0     
main.rs +2/-0     
network-sweeps.md +102/-0 
troubleshooting-guide.md +8/-0     
web-ui.md +17/-0   
test.exs +3/-1     
compute_config_hash.ex +28/-0   
increment_version.ex +25/-0   
20260111220000_add_scanner_metrics_to_executions.exs +26/-0   
20260111231229_add_device_tags.exs +19/-0   
20260111194010.json +301/-0 
20260111194010.json +174/-0 
20260111194010.json +228/-0 
20260111231229.json +533/-0 
20260111231229.json +290/-0 
20260111231229.json +349/-0 
20260111231229.json +267/-0 
20260111231229.json +212/-0 
20260111194010.json +246/-0 
design.md +75/-0   
proposal.md +20/-0   
spec.md +30/-0   
tasks.md +34/-0   
design.md +377/-0 
proposal.md +61/-0   
spec.md +128/-0 
spec.md +39/-0   
spec.md +263/-0 
spec.md +102/-0 
tasks.md +174/-0 
BUILD.bazel +2/-0     
push_loop.go +45/-2   
server.go +134/-485
server_test.go +42/-490
sweep_config_gateway.go +197/-0 
sweep_performance_test.go +7/-47   
sweep_service.go +94/-34 
types.go +13/-12 
metrics.go +3/-1     
sweep.go +36/-1   
interfaces.go +6/-0     
BUILD.bazel +0/-1     
aggregation_test.go +9/-12   
availability_logic_test.go +5/-8     
interfaces.go +2/-0     
sweeper.go +123/-557
sweeper_prune_test.go +2/-2     
sweeper_test.go +2/-1062
tcp_optimization_test.go +6/-29   
types.go +0/-77   
monitoring.pb.go +270/-69
monitoring.proto +33/-0   
devices.rs +71/-0   
config.exs +6/-2     
serviceradar_web_ng_web.ex +1/-0     
settings_components.ex +5/-0     
ui_components.ex +8/-2     
index.ex +1/-1     
index.ex +6/-6     
router.ex +8/-0     
catalog.ex +11/-1   

state management using <code>MapSet</code> with <br>support for selecting all matching devices<br> <li> Added helper functions for tag parsing, device filtering, and SRQL <br>query integration</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7">+451/-3</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>target_criteria.ex</strong><dd><code>Device targeting DSL with flexible criteria operators</code>&nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/target_criteria.ex <ul><li>Created comprehensive device targeting DSL module supporting 24+ <br>operators (eq, neq, in, contains, in_cidr, has_any, etc.)<br> <li> Implemented device filtering and IP extraction based on criteria <br>matching<br> <li> Added CIDR and IP range matching with IPv4 support using bitwise <br>operations<br> <li> Included validation and Ash filter conversion for database queries<br> <li> Provided tag matching with key-value pair support</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-a71ead6955e751bd5d27c6d81b1723147aebfbe4c80deedc20afbaed02afc062">+496/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_group.ex</strong><dd><code>Sweep group resource with scheduling and targeting</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_group.ex <ul><li>Created <code>SweepGroup</code> Ash resource for user-configured network sweep <br>groups<br> <li> Implemented interval and cron-based scheduling with configurable <br>partitions and agents<br> <li> Added device targeting via <code>target_criteria</code> DSL and <code>static_targets</code> <br>lists<br> <li> Included actions for managing targets, enabling/disabling groups, and <br>recording executions<br> <li> Defined policies for 
role-based access control (super_admin, admin, <br>operator)</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-b7bfa2b8463be683eee7cf96abc97b135d225049d6507009f98a2e0ee658c728">+380/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>show.ex</strong><dd><code>Device detail page sweep status section</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex <ul><li>Added sweep status section component displaying latest sweep results <br>for a device<br> <li> Implemented sweep results loading by device IP with Ash query <br>integration<br> <li> Added sweep history table showing recent sweep executions with status <br>and response times<br> <li> Included helper functions for formatting sweep data and building <br>actor/tenant context<br> <li> Displays open ports, response times, and status indicators from sweep <br>results</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-92d8e0af57d2f65dfb8562e896dbea17ef9f47074ffd14f58261decc76f80c24">+209/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_group_execution.ex</strong><dd><code>Sweep execution tracking resource</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_group_execution.ex <ul><li>Created <code>SweepGroupExecution</code> Ash resource for tracking sweep execution <br>lifecycle<br> <li> Implemented status transitions (pending → running → completed/failed) <br>with timing tracking<br> <li> Added actions for starting, completing, and failing executions with <br>automatic duration calculation<br> <li> Included scanner metrics storage and 
success rate calculation<br> <li> Defined read actions for querying executions by group, status, or <br>recency</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-35d4958b8252ecd025abfb7526da27788c2a1f08a5e22883bc1ca517c1432eb0">+288/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep.ex</strong><dd><code>Sweep event processor with inventory updates</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/event_writer/processors/sweep.ex <ul><li>Extended sweep event processor to update device inventory via <br><code>SweepResultsIngestor</code><br> <li> Added device availability status updates based on sweep results<br> <li> Implemented device discovery and creation for unknown hosts found <br>during sweeps<br> <li> Added support for execution_id, sweep_group_id, and config_hash in <br>sweep messages<br> <li> Included fallback for availability-only updates when execution context <br>is unavailable</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-ac15b7f40862a0f6bab14cfaf84ae8776c232baa90166014e34159a81427811e">+137/-1</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_monitor_worker.ex</strong><dd><code>Sweep execution monitoring worker</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_monitor_worker.ex <ul><li>Created Oban worker for monitoring missed sweep executions across all <br>tenants<br> <li> Implemented interval parsing (15m, 1h, 1d) with grace period for <br>expected execution time<br> <li> Added internal log publishing to <code>logs.internal.sweep</code> for missed sweeps<br> <li> Included detailed payload with sweep group 
context and overdue <br>duration<br> <li> Supports configurable grace period (default 5 minutes) for clock drift <br>tolerance</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-c25f50cc2e496fc7f8f883a12098f278c56ae1d2d42d4d2af4200d364e2ef9a8">+213/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_host_result.ex</strong><dd><code>Add SweepHostResult resource for sweep execution tracking</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_host_result.ex <ul><li>New Ash resource for tracking per-host results from network sweep <br>executions<br> <li> Stores IP, hostname, status, response time, open ports, and error <br>details<br> <li> Includes custom indexes for efficient querying by execution, IP, and <br>status<br> <li> Provides read actions for filtering by execution and availability <br>status</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-6c0c4eb2ef2eeef5d76fd9fc1859240b7e3d4b1b2cdd6e72f86208c32c0086db">+237/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_cache.ex</strong><dd><code>Add ETS-based configuration cache with TTL and cleanup</code>&nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_cache.ex <ul><li>New GenServer-based ETS cache for compiled agent configurations<br> <li> Implements TTL-based expiration with periodic cleanup of expired <br>entries<br> <li> Provides hash-based change detection for efficient config updates<br> <li> Supports cache invalidation by tenant, config type, or specific key</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-95f0c8640267167409c8af66d33550c2440b1ac5ec810f5e4d6fcd8df6ef8e2f">+199/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_instance.ex</strong><dd><code>Add ConfigInstance resource for agent configuration 
storage</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_instance.ex <ul><li>New Ash resource for storing compiled configuration instances for <br>agents<br> <li> Tracks version, content hash, and delivery metadata<br> <li> Supports partition-based and agent-specific configurations<br> <li> Includes relationships to templates and version history</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-37fd443381f2242517ed060fccbc634e3f2500b06a8702393ad8e7e034caece2">+222/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_server.ex</strong><dd><code>Add ConfigServer for config compilation orchestration</code>&nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_server.ex <ul><li>New GenServer orchestrating config compilation and caching<br> <li> Provides API for getting configs with automatic caching and hash-based <br>change detection<br> <li> Supports cache invalidation when source resources change<br> <li> Falls back to database loading when no compiler is registered</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-a4d8447584bfcd1088465714c00bea67c90100320b125857ac5bd6a9783de468">+187/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_profile.ex</strong><dd><code>Add SweepProfile resource for reusable scanner configurations</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_profile.ex <ul><li>New Ash resource for admin-managed scanner profiles<br> <li> Defines reusable scan configurations (ports, modes, timeouts, <br>concurrency)<br> <li> Supports ICMP and TCP-specific settings<br> <li> Can be restricted to admin-only usage</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-4977d4fef97c317cd8f6f15b0762d96d856b4d42e5ae7bf1341e8f7ef26651e4">+215/-0</a>&nbsp; </td> 
</tr> <tr> <td> <details> <summary><strong>sweep_data_cleanup_worker.ex</strong><dd><code>Add worker for cleanup of old sweep execution data</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_data_cleanup_worker.ex <ul><li>New Oban worker for periodic cleanup of old sweep data<br> <li> Deletes <code>SweepHostResult</code> records older than 7 days (configurable)<br> <li> Deletes completed/failed <code>SweepGroupExecution</code> records older than 30 <br>days (configurable)<br> <li> Processes data in batches to avoid memory issues</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-b8660522af1e1ad3ba8da56f754567fdae03d09fa9e18749a3a49435893073fe">+174/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_pubsub.ex</strong><dd><code>Add PubSub broadcaster for sweep execution updates</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_pubsub.ex <ul><li>New module for broadcasting sweep execution updates via Phoenix.PubSub<br> <li> Provides topic builders for tenant-level and execution-specific <br>subscriptions<br> <li> Broadcasts events for execution start, progress, completion, and <br>failure<br> <li> Includes safe broadcast handling when PubSub is unavailable</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-4c346d5c68dfcc3d45b42e115e1a0aebe410e2b1b28bda5eacec3a152fb3abfd">+166/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>rule_seeder.ex</strong><dd><code>Add seeder for default observability rules</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/observability/rule_seeder.ex <ul><li>New GenServer that seeds default observability rules for each tenant<br> 
<li> Creates default <code>LogPromotionRule</code> for missed sweep detection<br> <li> Creates default <code>StatefulAlertRule</code> for repeated missed sweeps<br> <li> Ensures rules exist without duplicating on subsequent runs</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-0518d428a7d0e3836a352c170a073dbc94bda8e024047f96a0e66ec4b892c166">+162/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>status_processor.ex</strong><dd><code>Add result buffering and telemetry metrics to status processor</code></dd></summary> <hr> elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_processor.ex <ul><li>Add <code>forward/2</code> function with configurable buffering and metrics <br>emission<br> <li> Implement buffering of results when core is unavailable<br> <li> Add telemetry metrics for result forwarding with duration and status <br>tracking<br> <li> Remove retry queue logic in favor of explicit buffering</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-2d04050dff3ba2cc8153559e33a892f5f421982bf6dcbda7172857b3bf398a02">+63/-11</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_template.ex</strong><dd><code>Add ConfigTemplate resource for reusable config schemas</code>&nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_template.ex <ul><li>New Ash resource for reusable configuration templates<br> <li> Supports tenant-specific and admin-only templates<br> <li> Stores JSON schema and default values for config validation<br> <li> Includes relationships to config instances</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-1151a4c39db6cce97d7534f76218bd4ae14de0b1fff9cade8294c88ef67044a2">+163/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_version.ex</strong><dd><code>Add ConfigVersion resource for configuration audit 
history</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_version.ex <ul><li>New Ash resource for tracking configuration version history<br> <li> Stores compiled config, hash, and source IDs for each version<br> <li> Captures actor information and change reasons for audit trail<br> <li> Supports rollback and historical comparison</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-df079b1f5d9a8c20be9c34dcfc21f00b03bc12bc729b71471931de1e2aceff83">+158/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_publisher.ex</strong><dd><code>Add publisher for config invalidation events to NATS</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_publisher.ex <ul><li>New module for publishing config invalidation events to NATS<br> <li> Invalidates local cache and publishes cluster-wide events<br> <li> Supports resource-specific change tracking with source information<br> <li> Handles NATS connection failures gracefully</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-40915f43b6db9aa1905de3743bad4cd08f0ccc183dc43a2dcc69ff2235eb5b8e">+119/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>status_buffer.ex</strong><dd><code>Add in-memory buffer for result payloads during outages</code>&nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_buffer.ex <ul><li>New GenServer for buffering result payloads during core outages<br> <li> Implements bounded in-memory queue with configurable size and flush <br>interval<br> <li> Periodically flushes buffered results back to core<br> <li> Drops oldest entries when buffer reaches capacity</ul> </details> </td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-557fa02eff90dabc3757fea28b49208bb07d03698bd485219bc7301058dbb2c4">+117/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>agent_config_generator.ex</strong><dd><code>Integrate sweep configuration into agent payload generation</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/edge/agent_config_generator.ex <ul><li>Add <code>load_sweep_config/2</code> function to load compiled sweep configurations<br> <li> Integrate sweep config into agent payload via <code>ConfigServer</code><br> <li> Merge sweep config with existing sync and check configurations<br> <li> Handle missing sweep configs gracefully with empty defaults</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-f368b9b41fa062759f00ff6fcae314cc5a42bb1caca82a9069103a803df1f9d7">+35/-5</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>compiler.ex</strong><dd><code>Add Compiler behaviour for pluggable config compilation</code>&nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/compiler.ex <ul><li>New behaviour module defining the compiler interface for config types<br> <li> Specifies callbacks for <code>compile/4</code>, <code>config_type/0</code>, and <br><code>source_resources/0</code><br> <li> Includes registry of available compilers (currently sweep)<br> <li> Provides <code>content_hash/1</code> utility for SHA256 hashing of configs</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-c853b548702e07ffa78e0a371869cd58f00b8ec13836220866d34298e047cbcd">+102/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>template_seeder.ex</strong><dd><code>Add sweep monitoring templates and alert rules</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> 
elixir/serviceradar_core/lib/serviceradar/observability/template_seeder.ex <ul><li>Add passthrough log template for internal sweep logs<br> <li> Add <code>promote_missed_sweeps</code> log promotion rule for sweep events<br> <li> Add <code>repeated_missed_sweeps</code> stateful alert rule for repeated missed <br>executions<br> <li> Configure alert severity and grouping for sweep monitoring</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-23e36dfd159d88c93b9115b3a2879574bbf840e8177043375926287f7b06be0b">+49/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>create_version_history.ex</strong><dd><code>Add change for creating config version history</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/changes/create_version_history.ex <ul><li>New Ash change that creates version history records on config updates<br> <li> Captures previous config state, hash, and source IDs before update<br> <li> Extracts actor information for audit trail<br> <li> Handles creation failures gracefully without blocking main operation</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-edf276f4ea9f73468d843cc8db84191727a3e37c3e964146c29419e91f120d50">+72/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>monitoring.pb.ex</strong><dd><code>Extend sweep completion status with execution tracking and scanner </code><br><code>stats</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/proto/monitoring.pb.ex <ul><li>Add <code>execution_id</code> and <code>sweep_group_id</code> fields to <code>SweepCompletionStatus</code><br> <li> Add new <code>SweepScannerStats</code> message with packet, ring, retry, and port <br>metrics<br> <li> Include <code>scanner_stats</code> field in <code>SweepCompletionStatus</code> for detailed <br>statistics<br> <li> Add 
rate limit and drop rate tracking for network monitoring</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-c4072f56f73e61992b2401a13598f9edcf11b255ba343c79cc629782b0b0dca0">+22/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config_invalidation_notifier.ex</strong><dd><code>Add notifier for config resource change invalidation</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config/config_invalidation_notifier.ex <ul><li>New Ash notifier for triggering cache invalidation on resource changes<br> <li> Monitors <code>ConfigInstance</code> and <code>ConfigTemplate</code> resource changes<br> <li> Publishes invalidation events via <code>ConfigPublisher</code> for cluster-wide <br>propagation<br> <li> Logs invalidation events for debugging</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-5bc1a50cf1eb09eaaa1dd8df41fbe73c3524a1ec4db43d5d45548c63fab94f39">+70/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>application.ex</strong><dd><code>Add config and rule seeding infrastructure to application</code></dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/application.ex <ul><li>Add <code>RuleSeeder</code> child process for seeding default observability rules<br> <li> Add <code>ConfigCache</code> GenServer for ETS-based config caching<br> <li> Add <code>ConfigServer</code> GenServer for config compilation orchestration<br> <li> Add maintenance and monitoring queues to Oban configuration</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-a9ffbf400b7f9b22cd8980c41286c54fe373f1f1a8684bb6a344a5fb39b178d0">+15/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_jobs.ex</strong><dd><code>Add SweepJobs domain for sweep configuration management</code>&nbsp; &nbsp; </dd></summary> <hr> 
elixir/serviceradar_core/lib/serviceradar/sweep_jobs.ex <ul><li>New Ash domain for network sweep job management<br> <li> Registers <code>SweepProfile</code>, <code>SweepGroup</code>, <code>SweepGroupExecution</code>, and <br><code>SweepHostResult</code> resources<br> <li> Provides documentation on sweep groups and device targeting DSL<br> <li> Enables AshAdmin for domain management</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-60717aee243422185a36e6d154b70446bd696626d7bc79b809d0fbfd716cb5f0">+50/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sync_ingestor.ex</strong><dd><code>Add tags field support to device sync ingestion</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/inventory/sync_ingestor.ex <ul><li>Add <code>tags</code> field extraction from sync updates<br> <li> Initialize <code>tags</code> as empty map in device creation<br> <li> Support tags in both update and default device structures</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-fdf70a310cef758f735fae943c2a33bc7f851a1c3d1ba66499e911fd2bc5611a">+3/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>agent_config.ex</strong><dd><code>Add AgentConfig domain for configuration management</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/agent_config.ex <ul><li>New Ash domain for agent configuration management<br> <li> Registers <code>ConfigTemplate</code>, <code>ConfigInstance</code>, and <code>ConfigVersion</code> resources<br> <li> Provides pluggable compiler pattern for config types<br> <li> Enables AshAdmin for domain management</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-320ea8054916838bfb11d6279fb7ce58837ade6d9c9885368f02957be135c50b">+40/-0</a>&nbsp; 
&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>device.ex</strong><dd><code>Add tags attribute to Device resource</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/lib/serviceradar/inventory/device.ex <ul><li>Add <code>tags</code> attribute as a map for user-defined labels<br> <li> Include <code>tags</code> in create and update actions<br> <li> Default to empty map for new devices</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-f5671d34fab1f3bdb0bcc4db602074e03c803bb379bb530c54da9925cae883f2">+8/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>query_builder_components.ex</strong><dd><code>Add query builder UI components</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> web-ng/lib/serviceradar_web_ng_web/components/query_builder_components.ex <ul><li>New component module for query builder UI elements<br> <li> Implements <code>query_builder_pill/1</code> component for displaying query <br>criteria<br> <li> Provides visual styling with icons and connection lines for nested <br>criteria</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-5886ef9439ff16d903c1e447d50fd81c93826b302ba0969f7117fba883e525b6">+24/-0</a>&nbsp; &nbsp; </td> </tr> </table></details></td></tr><tr><td><strong>Configuration changes</strong></td><td><details><summary>5 files</summary><table> <tr> <td> <details> <summary><strong>zen_rule_seeder.ex</strong><dd><code>Add Passthrough Zen Rule for Sweep Logs</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> 
elixir/serviceradar_core/lib/serviceradar/observability/zen_rule_seeder.ex <ul><li>Added new passthrough Zen rule for internal sweep logs with subject <br><code>logs.internal.sweep</code><br> <li> Configured with default agent, enabled state, and order priority of <br>100</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-020128ea25bd10b783725e19d2de73d039c5bfb116a80f13702617bf278e6801">+11/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>20260111200000_add_sweep_jobs_tables.exs</strong><dd><code>Database schema for sweep jobs domain</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/priv/repo/tenant_migrations/20260111200000_add_sweep_jobs_tables.exs <ul><li>Created four new tables: <code>sweep_profiles</code>, <code>sweep_groups</code>, <br><code>sweep_group_executions</code>, <code>sweep_host_results</code><br> <li> Added indexes for tenant partitioning, partition/agent lookups, and <br>execution tracking<br> <li> Defined foreign key relationships between groups, executions, and <br>results<br> <li> Included fields for schedule configuration, device targeting, and scan <br>settings</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-8fe0ddbd0de23b297d28025ec21429230db5315e12360e0080c3f4d62e118b8e">+254/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>20260111194010_add_agent_config_tables.exs</strong><dd><code>Agent configuration management database schema</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/priv/repo/tenant_migrations/20260111194010_add_agent_config_tables.exs <ul><li>Created agent configuration versioning tables: <code>agent_config_versions</code>, <br><code>agent_config_templates</code>, 
<code>agent_config_instances</code><br> <li> Added indexes for tenant/type/partition/agent lookups and version <br>tracking<br> <li> Implemented foreign key relationships for template and instance <br>references<br> <li> Included fields for config compilation, hashing, and delivery tracking</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-ab37e828b85ed413aaf09cb439ecb0f6e18de2fbedaa219307771390cb5a31f6">+185/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>config.exs</strong><dd><code>Configure Ash domains and Oban workers for sweep jobs</code>&nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/config/config.exs <ul><li>Add <code>ServiceRadar.AgentConfig</code> and <code>ServiceRadar.SweepJobs</code> to Ash domains<br> <li> Add maintenance and monitoring queues to Oban configuration<br> <li> Add <code>SweepDataCleanupWorker</code> cron job (daily at 3 AM)<br> <li> Add <code>SweepMonitorWorker</code> cron job (every 5 minutes)</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-42b3888cc53d9dcf4ebc45ec7fffb2c672b129bffe763b6c76de58e4678a13a8">+9/-3</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>20260111210000_add_sweep_cleanup_indexes.exs</strong><dd><code>Add database indexes for sweep data cleanup</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/priv/repo/tenant_migrations/20260111210000_add_sweep_cleanup_indexes.exs <ul><li>Add index on <code>sweep_host_results.inserted_at</code> for cleanup queries<br> <li> Add filtered index on <code>sweep_group_executions.started_at</code> for <br>completed/failed executions<br> <li> Enables efficient batch deletion of old sweep data</ul> </details> </td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-e0f47d34ff9c8ed951c9622ce1a53c3fde4303bc78d169b81a15a4c260ebb788">+38/-0</a>&nbsp; &nbsp; </td> </tr> </table></details></td></tr><tr><td><strong>Tests</strong></td><td><details><summary>5 files</summary><table> <tr> <td> <details> <summary><strong>sweep_compiler_test.exs</strong><dd><code>Sweep compiler and target criteria test suite</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/test/serviceradar/sweep_jobs/sweep_compiler_test.exs <ul><li>Added comprehensive test suite for sweep config validation and <br>consistency<br> <li> Tests for <code>TargetCriteria</code> DSL operators (eq, neq, contains, in_cidr, <br>has_any, etc.)<br> <li> Validates config structure for agent consumption with required fields <br>and types<br> <li> Tests deterministic config hashing and device filtering logic<br> <li> Includes integration tests for sweep group structure and target <br>validation</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-57f7deaf4baa32135b553fb62cb1679f9f40030a4459b4482df9452b6b964550">+363/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_results_flow_e2e_test.exs</strong><dd><code>Add E2E test for sweep results ingestion and device updates</code></dd></summary> <hr> elixir/serviceradar_core/test/serviceradar/sweep_jobs/sweep_results_flow_e2e_test.exs <ul><li>New end-to-end integration test for sweep results ingestion flow<br> <li> Tests device creation and updates from sweep results<br> <li> Verifies execution statistics tracking and host result storage<br> <li> Validates discovery source management and availability status updates</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d400921d5787a7de52aab03e0725f804a083c2170bf62c4ede876bb4e73a5c70">+163/-0</a>&nbsp; </td> </tr> <tr> <td> <details> 
<summary><strong>target_criteria_test.exs</strong><dd><code>Add unit tests for device targeting criteria DSL</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/test/serviceradar/sweep_jobs/target_criteria_test.exs <ul><li>New unit tests for <code>TargetCriteria</code> module covering tag operators<br> <li> Tests range and numeric comparison operators for device targeting<br> <li> Validates null checks and criteria validation logic<br> <li> Tests Ash filter conversion for database queries</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-4adb502a7438004de2a88b74dd152eaa1562e1e0e06f663c6cdb501037e6325c">+92/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td> <details> <summary><strong>sweep_config_distribution_integration_test.exs</strong><dd><code>Add integration test for sweep config distribution</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> elixir/serviceradar_core/test/serviceradar/edge/sweep_config_distribution_integration_test.exs <ul><li>New integration test for sweep configuration distribution to agents<br> <li> Tests config compilation from SweepGroup and SweepProfile resources<br> <li> Verifies target device inclusion and profile settings in compiled <br>config<br> <li> Validates agent payload generation with sweep configuration</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-0ed0dbd0a4802e88423a72a294bf9ec2b573aee4f51ad317a5798822d323ece2">+103/-0</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>networks_live_test.exs</strong><dd><code>Add tests for network sweep configuration UI</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> web-ng/test/serviceradar_web_ng_web/live/settings/networks_live_test.exs <ul><li>New LiveView tests for network sweep configuration UI<br> <li> Tests listing sweep groups 
and profiles in settings<br> <li> Tests tab switching between groups and profiles<br> <li> Tests form rendering for creating new groups and profiles</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-81736ddb52f2c677276e60e51726c3fd0fb96f6d4bd349649f0b381e28e48088">+72/-0</a>&nbsp; &nbsp; </td> </tr> </table></details></td></tr><tr><td><strong>Formatting</strong></td><td><details><summary>2 files</summary><table> <tr> <td> <details> <summary><strong>srql_components.ex</strong><dd><code>Component naming refactor for query builder</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> web-ng/lib/serviceradar_web_ng_web/components/srql_components.ex <ul><li>Renamed <code>srql_builder_pill</code> component calls to <code>query_builder_pill</code> <br>throughout the file<br> <li> Updated 14 component references to use the new naming convention<br> <li> Added import for <code>QueryBuilderComponents</code> module</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-aeeb844af4bc736fac30dab33b73c6d9023ffa5c75b79dacfbc287b61524ec59">+17/-33</a>&nbsp; </td> </tr> <tr> <td> <details> <summary><strong>show.ex</strong><dd><code>Reformat agent show view task definitions</code>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </dd></summary> <hr> web-ng/lib/serviceradar_web_ng_web/live/agent_live/show.ex <ul><li>Reformat Task.async calls for improved readability<br> <li> Split long lines into multiple lines with proper indentation<br> <li> No functional changes, purely formatting improvements</ul> </details> </td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-5e622205d1abddd8ad7dcf7a8ca1be583804d622d3d38b75140e9b909cf0534a">+18/-6</a>&nbsp; &nbsp; </td> </tr> </table></details></td></tr><tr><td><strong>Additional 
files</strong></td><td><details><summary>63 files</summary><table> <tr> <td><strong>MODULE.bazel</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-6136fc12446089c3db7360e923203dd114b6a1466252e71667c6791c20fe6bdc">+1/-10</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>server.rs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-2c4395fee16396339c3eea518ad9bec739174c67c9cedf62e6848c17136dd33e">+2/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>grpc_server.rs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-e4564a93f6cf84ff91cd3d8141fc9272ec9b4ec19defd107afa42be01fcfed5b">+2/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>main.rs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-33b655d8730ae3e9c844ee280787d11f1b0d5343119188273f89558805f814ba">+2/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>network-sweeps.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-0d5e993b4386d18c84c19fd8ff631ce6fdda85d2057853d344bb8852d06301d3">+102/-0</a>&nbsp; </td> </tr> <tr> <td><strong>troubleshooting-guide.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-343b87dd0b430b3f99b16d32200c353bb6e3d7bbb185da3c1b3effc3a03e7f2a">+8/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>web-ui.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-7e4dc17ccc5b2fd5773b7e6c9583dbe0f597ab9ae21849a2ec0a31554a6337a3">+17/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>test.exs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d8fffec56a9bc7305c02bbdd424e21031ecbbb2c2e6ba07ba626e6043540e360">+3/-1</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>compute_config_hash.ex</strong></td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-f2924c6800a1a9976c59b736ae3e76d86fe566d4e839a3790567439e6cd97631">+28/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>increment_version.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-7d5e93c82f641a45257a70d82b43dc9b7070be776ca1e79763a1b3cb5520cecf">+25/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>20260111220000_add_scanner_metrics_to_executions.exs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-e4097d73b5edaaea9c94a88208ed0129d51417f421ac76831e841879533d765a">+26/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>20260111231229_add_device_tags.exs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-8cfdc781c9f37c20321083bccf1d8ab766c6785994f2f95230d0791e61051543">+19/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>20260111194010.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-8b055d5e370b56c1b506d04de125f95900feea080b6dbb65a2c87a4f1372fada">+301/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111194010.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-f255ad01e3ed524511b41c148545fb03c7d4afb10346a98ac1f9add4e5a32286">+174/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111194010.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d5f8f29013579541b53860ceeaafbd56b92aa0fecd4453ad4e91662198850f14">+228/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111231229.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-78e268a6fea4614c894f4baad7fede56676b850ed71e622975f7fc77f9a648af">+533/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111231229.json</strong></td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-47f629e5e8998059a23f6e5d04717ab5b315112fa281a7f5adcd52a673a47255">+290/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111231229.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-23eeb3de41a3d19b2c9b49f504f8492dfc30b1f4f9e0311f01b11d2b5dddd918">+349/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111231229.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-da3812ca40cdd9a941f30de247b875e220e4a321076182a5d50d3cc3dc689cfd">+267/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111231229.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-46eab3c6900bb1afb267f88fbb160f779f0dbdde481b37eb5ecdee9b567d58c0">+212/-0</a>&nbsp; </td> </tr> <tr> <td><strong>20260111194010.json</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-ba377c2fa4c5eca11d37d65e828b2107f03f7e223bf0c9121024a85d5779505e">+246/-0</a>&nbsp; </td> </tr> <tr> <td><strong>design.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-1b189f75058c304d437050097d758deadb6af6b1f994880721aaf3ae121459e4">+75/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>proposal.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-46502c38a14321ad2d0e6fc7ee8fa53b8aa3fe16b2b83248202e5e894807ecb0">+20/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>spec.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-89269af2d0f44461b0987d0ca90b7a1bbf0edf29d791e119756c188ab53ddb1a">+30/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>tasks.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-2d62279edd50504c053ff7ccb4a3bdd58c23eaaca83b0ee1a875e22e611d9cd2">+34/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>design.md</strong></td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-ae0ce4aff2c9c1b07bd6c8ead005aa62f2fdee4917f5d6f9692d5d5f372ab141">+377/-0</a>&nbsp; </td> </tr> <tr> <td><strong>proposal.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-fa41dca6cbb84d9b96b44d8fa1939c1f2332e05a807ba219da16c3cc722e199a">+61/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>spec.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d3cb7aa6f999ab423bf0621a680b50e4465c2f1c7feee10c94387c8423ad5cb4">+128/-0</a>&nbsp; </td> </tr> <tr> <td><strong>spec.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-187fbe69ad7dd8fe90e276f23db528b64e969571d5b08332135a72b26035f443">+39/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>spec.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-6f67209188549f2e9024bef3e5a5f7096c6422ee4c101709403f695971e32841">+263/-0</a>&nbsp; </td> </tr> <tr> <td><strong>spec.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d2c9834a90b4210a504067222d6c2acbf36c07158a1b8377771f4e9bacadeb06">+102/-0</a>&nbsp; </td> </tr> <tr> <td><strong>tasks.md</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-0032fb7a412d25916760a7e4524a3336816a780ba684233004b16f4ca1a53f8c">+174/-0</a>&nbsp; </td> </tr> <tr> <td><strong>BUILD.bazel</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-9763df8132fa8d8919489fdfaf2434921b436714eb2aa276dca0ea4f92c02ec5">+2/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>push_loop.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-5f0d59be34ef26b449d7f5fd2b198a29b277936b9708a699f7487415ed6c2785">+45/-2</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>server.go</strong></td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-3c51e5356a25859a3300ab62ed2494462feb2567ef69e6db3aa2bbc1032c1c5d">+134/-485</a></td> </tr> <tr> <td><strong>server_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-bd96a3d3da5a6788f2a8df14f82bd22b96dde41f7067d20484ca0b5abeaf264e">+42/-490</a></td> </tr> <tr> <td><strong>sweep_config_gateway.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-0590118d93b1be7997233109d892bea154ed59038b8069290db99dc03080d307">+197/-0</a>&nbsp; </td> </tr> <tr> <td><strong>sweep_performance_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-77b4e4256ad3aa348ecec7e2548e00f7b23cdaccde4322f7ed18be4cc7d5148d">+7/-47</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>sweep_service.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-988114dc491154ea9a249227411b24e463212831cd81b43e0844862ce7a41343">+94/-34</a>&nbsp; </td> </tr> <tr> <td><strong>types.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-e3a3767c816cdb568a387a32243f7348046f1f3445549cc06368870c914496cd">+13/-12</a>&nbsp; </td> </tr> <tr> <td><strong>metrics.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-1c5e2a501867b2fe257cc13a4ec0a49edf5b97f7c8c1c511e0787803fa692314">+3/-1</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>sweep.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-3cc7dd0e748c9f77be9e384fed2703ab77375716524b70860153b6a1abae27ca">+36/-1</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>interfaces.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-63bb5df96cdf14d46b202d0b39c061af58a00a5f73660a9083d894e9f21128e2">+6/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>BUILD.bazel</strong></td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-adb8cc16a3f4d1d26cd3c2b0e878e5024d154711e4d65824ac9868c7ad6b14aa">+0/-1</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>aggregation_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-0c6eb392da3429542428a3c3242866632ba312a670e5face8b1d295fca100d82">+9/-12</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>availability_logic_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-3602efd19c56c4c95cfde64c4e937d84773d7f2d529f184cfd89aaaec31eb852">+5/-8</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>interfaces.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-9fcc03d2b04ba7aa9e3123be8b7a8c1e2f953b6bb4bae83884a6ae9335c178ff">+2/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>sweeper.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d11dee1504de681453c5a04e22ae44d64820221ac6b84a1630d3a3929dace47a">+123/-557</a></td> </tr> <tr> <td><strong>sweeper_prune_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-8ce87b06d677d2de133ffe3c2973bf43e0d2d8b3bd1e120c393a7777b17a1c6f">+2/-2</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>sweeper_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-7714a2d9e6f06cd0d5944247437dda7796228b95a507478e92f0fc6e09179424">+2/-1062</a></td> </tr> <tr> <td><strong>tcp_optimization_test.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-1218aed38178a35178132b397803ada99d736109eacf734a56014f77631ea076">+6/-29</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>types.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-8f1a81192d004a38415447ef54a364471f7404d03d6c28fc4ade5f0e181bc1ca">+0/-77</a>&nbsp; &nbsp; </td> </tr> 
<tr> <td><strong>monitoring.pb.go</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-4f7e955b42854cc9cf3fb063b95e58a04f36271e6a0c1cb42ea6d7953dd96cc4">+270/-69</a></td> </tr> <tr> <td><strong>monitoring.proto</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-c4fc4e9693e4d2cf45697113ccca65b8c5ff18d2037e31ade411473533b36c2b">+33/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>devices.rs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-3202f22fff6863ed7848a129c49e2323322462b379d896d3fca2e59aa6f7b4c5">+71/-0</a>&nbsp; &nbsp; </td> </tr> <tr> <td><strong>config.exs</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-da0b524f083d350207155c247526098fdd68866d64e7e4eae3d96fdae31bcfb6">+6/-2</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>serviceradar_web_ng_web.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-bfeb4e8164b81e0ff846081b11224a63282c549425d41e300932d79d164155ae">+1/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>settings_components.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-540f72cd405a3221e6f350afd04e644db83f0a504938a8907e94c417d070601e">+5/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>ui_components.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-ea4114e247de47a7a6261975047564b934b013e368aa2a7546a7853a9eca0f86">+8/-2</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>index.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-d0d22a987aa942ef25088696daea862e36bd8ca083ce29e4670f63e62947c62f">+1/-1</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>index.ex</strong></td> <td><a 
href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-61d0262af13a42905ebbd793e83537e61bfc09493df1664a82eb2536980ee1cd">+6/-6</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>router.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-df516cd33165cd85914c1ccb3ff6511d3fe688d4a66498b99807958998c5d07c">+8/-0</a>&nbsp; &nbsp; &nbsp; </td> </tr> <tr> <td><strong>catalog.ex</strong></td> <td><a href="https://github.com/carverauto/serviceradar/pull/2263/files#diff-80c07c25c17e48bf86860ec91db10256b7700a863d5371798e1893e741dc0e15">+11/-1</a>&nbsp; &nbsp; </td> </tr> </table></details></td></tr></tbody></table> </details> ___
qodo-code-review[bot] commented 2026-01-12 03:07:17 +00:00 (Migrated from github.com)

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2263#issuecomment-3736732746
Original created: 2026-01-12T03:07:17Z

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🔴 Atom exhaustion

Description: User-controlled tab values are converted with String.to_atom/1, enabling unbounded atom
creation (a VM-wide DoS vector) if an attacker can send arbitrary tab strings.
index.ex [230-232]

Referred Code
def handle_event("switch_tab", %{"tab" => tab}, socket) do
  {:noreply, assign(socket, :active_tab, String.to_atom(tab))}
end
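A common mitigation is to map the user-supplied string through an explicit whitelist rather than minting atoms at runtime. A minimal sketch, assuming the three tabs named in the PR description (the module and function names here are illustrative, not the PR's actual code):

```elixir
defmodule TabParser do
  # Whitelist of tab strings the UI actually renders; unknown input
  # falls back to the default tab instead of creating a new atom.
  @valid_tabs %{
    "groups" => :groups,
    "profiles" => :profiles,
    "active_scans" => :active_scans
  }

  def parse(tab) when is_binary(tab), do: Map.get(@valid_tabs, tab, :groups)
end
```

The handler would then call `assign(socket, :active_tab, TabParser.parse(tab))`, keeping the VM's atom table bounded regardless of what the client sends.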

⚪ Authorization bypass

Description: Multiple Ash reads/gets are executed with authorize?: false, which can bypass
authorization checks and allow unauthorized users to read sweep groups/profiles/executions
depending on routing/access controls.
index.ex [1900-1947]

Referred Code
defp load_sweep_groups(scope) do
  case Ash.read(SweepGroup, scope: scope, authorize?: false) do
    {:ok, groups} -> groups
    {:error, _} -> []
  end
end

defp load_sweep_group(scope, id) do
  case Ash.get(SweepGroup, id, scope: scope, authorize?: false) do
    {:ok, group} -> group
    {:error, _} -> nil
  end
end

defp load_sweep_profiles(scope) do
  case Ash.read(SweepProfile, scope: scope, authorize?: false) do
    {:ok, profiles} -> profiles
    {:error, _} -> []
  end
end




 ... (clipped 27 lines)
⚪ Query injection

Description: SRQL is constructed from user-provided criteria and escape_srql_value/1 only quotes spaces
(not other SRQL metacharacters), making SRQL query manipulation/injection plausible (e.g.,
injecting operators/parentheses/negations) if SRQL parsing permits it.
index.ex [2216-2353]

Referred Code
# Device count query using SRQL
defp get_matching_device_count(scope, criteria) do
  srql_query = criteria_to_srql_query(criteria)

  if srql_query == "" do
    nil
  else
    srql_module =
      Application.get_env(:serviceradar_web_ng, :srql_module, ServiceRadarWebNG.SRQL)

    full_query = "in:devices #{srql_query} limit:10000"

    case srql_module.query(full_query, %{scope: scope}) do
      {:ok, %{"results" => results}} when is_list(results) -> length(results)
      _ -> nil
    end
  end
end

defp criteria_to_srql_query(criteria) when criteria == %{}, do: ""




 ... (clipped 117 lines)
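One way to harden this, sketched below under the assumption that SRQL accepts double-quoted string literals with backslash escapes (verify against the actual SRQL grammar before adopting): always quote the value and escape embedded backslashes and quotes, instead of quoting only when a space is present.

```elixir
defmodule SrqlEscape do
  # Always wrap the value in double quotes and escape backslashes and
  # embedded quotes, so operators/parentheses in user input stay literal.
  def escape_value(value) when is_binary(value) do
    escaped =
      value
      |> String.replace("\\", "\\\\")
      |> String.replace("\"", "\\\"")

    "\"" <> escaped <> "\""
  end
end
```

This keeps CIDR values like `10.0.0.0/24` intact while neutralizing quote-breaking input; a stricter alternative is to reject values that fail a per-field format check (CIDR, port list, tag name) before they ever reach query construction.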
Ticket Compliance
🟡
🎫 #2248
🟢 Add a new Settings UI "Network" area for admins/operators to configure network sweeper settings.
🟢 Admins can view configured sweep jobs under Networks settings and can edit/delete them.
🟢 Admin UI shows operational status such as when jobs last ran and other status indicators.
🟢 Admins can configure sweep jobs directly from the Settings UI (not only via bulk edit).
🟢 Admins can target devices by properties like IP range/CIDR, tags (discovery_sources), partition, etc., so jobs can apply to matching devices dynamically.
🟢 Provide admin-managed controls for what protocols/ports are available (e.g., scanner profiles) for tenant users/operators.
🔴 Partition selection is mandatory; "default" is always available.
⚪ Allow bulk-edit configuration of network scanner settings across a selection of devices or the entire inventory.
⚪ Support defining scan scope using IPs/CIDRs (single IPs represented as /32) and ports/protocols to check.
⚪ Allow optional selection of a specific agent; if not selected, any available agent in the selected partition can deliver config over gRPC and run the sweep.
⚪ Generate and distribute a new agent config for sweep jobs; agent polls periodically over gRPC for config.
⚪ Ensure sweep results flow via the new architecture: agent pushes to agent-gateway (tenant-aware via mTLS), agent-gateway forwards to core (RPC/chunking/streaming), and core processes results (DIRE) updating device inventory (e.g., ocsf_devices) and enriching availability.
Codebase Duplication Compliance
⚪ Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢 Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🟢 Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴 Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status: 🏷️
Silent error handling: Data-load failures from Ash.read/2 and Ash.get/4 are silently converted into empty lists
or nil without logging or actionable context, hindering production debugging.

Referred Code
defp load_sweep_groups(scope) do
  case Ash.read(SweepGroup, scope: scope, authorize?: false) do
    {:ok, groups} -> groups
    {:error, _} -> []
  end
end

defp load_sweep_group(scope, id) do
  case Ash.get(SweepGroup, id, scope: scope, authorize?: false) do
    {:ok, group} -> group
    {:error, _} -> nil
  end
end

defp load_sweep_profiles(scope) do
  case Ash.read(SweepProfile, scope: scope, authorize?: false) do
    {:ok, profiles} -> profiles
    {:error, _} -> []
  end
end




 ... (clipped 27 lines)
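A lightweight fix is to keep the permissive return values but log the failure before falling back. A generic sketch (the helper name and message wording are illustrative, not existing ServiceRadar code):

```elixir
defmodule SafeLoad do
  require Logger

  # Unwrap an {:ok, value} / {:error, reason} result, logging the error
  # with context instead of silently swallowing it.
  def unwrap_or(result, default, context) do
    case result do
      {:ok, value} ->
        value

      {:error, reason} ->
        Logger.warning("failed to load #{context}: #{inspect(reason)}")
        default
    end
  end
end
```

`load_sweep_groups/1` could then become `SafeLoad.unwrap_or(Ash.read(SweepGroup, scope: scope), [], "sweep groups")`, preserving the empty-list fallback for the UI while leaving a trace for production debugging.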

Learn more about managing compliance generic rules or creating your own custom rules

🔴 Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: 🏷️
Authorization bypass: Multiple reads/gets for sweep configuration and executions explicitly set authorize?:
false, bypassing authorization checks for user-accessible LiveView data operations.

Referred Code
defp load_sweep_groups(scope) do
  case Ash.read(SweepGroup, scope: scope, authorize?: false) do
    {:ok, groups} -> groups
    {:error, _} -> []
  end
end

defp load_sweep_group(scope, id) do
  case Ash.get(SweepGroup, id, scope: scope, authorize?: false) do
    {:ok, group} -> group
    {:error, _} -> nil
  end
end

defp load_sweep_profiles(scope) do
  case Ash.read(SweepProfile, scope: scope, authorize?: false) do
    {:ok, profiles} -> profiles
    {:error, _} -> []
  end
end




 ... (clipped 27 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status: 🏷️
Missing audit logging: Critical configuration actions (create/update/delete/toggle of sweep groups and profiles)
are performed without any explicit audit log entries that capture actor, action, and
outcome.

Referred Code
def handle_event("toggle_group", %{"id" => id}, socket) do
  scope = socket.assigns.current_scope

  case load_sweep_group(scope, id) do
    nil ->
      {:noreply, put_flash(socket, :error, "Sweep group not found")}

    group ->
      action = if group.enabled, do: :disable, else: :enable

      case Ash.update(group, action, scope: scope) do
        {:ok, _updated} ->
          {:noreply,
           socket
           |> assign(:sweep_groups, load_sweep_groups(scope))
           |> put_flash(
             :info,
             "Sweep group #{if action == :enable, do: "enabled", else: "disabled"}"
           )}

        {:error, _} ->



 ... (clipped 45 lines)
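If the project has no dedicated audit sink, even a structured log entry at each mutation site would satisfy a basic actor/action/outcome trail. A hypothetical sketch (the module, function, and field names are assumptions, not an existing ServiceRadar API):

```elixir
defmodule SweepAudit do
  require Logger

  # Emit one structured audit line per configuration mutation,
  # capturing who acted, on what, and with which outcome.
  def record(actor_id, action, resource_type, resource_id, outcome) do
    Logger.info("audit: sweep config change",
      actor: actor_id,
      action: action,
      resource_type: resource_type,
      resource_id: resource_id,
      outcome: outcome
    )
  end
end
```

The `toggle_group` handler above would call something like `SweepAudit.record(actor.id, :enable, :sweep_group, id, :ok)` on both the success and the error branch, so failed attempts are recorded as well.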

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: 🏷️
Unstructured logging: New log statements emit unstructured string logs (and include identifiers like
tenant_slug) rather than structured fields, requiring verification against the
project's logging standards and sensitivity classification.

Referred Code
      Logger.info(
        "Forwarded sync results to StatusHandler on #{node} for tenant=#{tenant_slug}"
      )
    else
      Logger.debug("Forwarded status to StatusHandler on #{node} for tenant=#{tenant_slug}")
    end

    :ok
  catch
    :exit, reason ->
      Logger.warning("Failed to forward status to core on #{node}: #{inspect(reason)}")
      {:error, :forward_failed}
  end

{:error, :not_found} ->
  if is_sync_results do
    Logger.warning("No StatusHandler found on any node for sync results")
  else
    Logger.debug("No StatusHandler found on any node")
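The same messages can carry the identifiers as Logger metadata instead of interpolating them into the string, which lets log backends index, filter, and redact the fields per policy. A sketch (the `node:`/`tenant:` keys are illustrative; backends must be configured to emit these metadata keys):

```elixir
defmodule StatusForwardLog do
  require Logger

  # Structured variant: the message stays constant, and the identifiers
  # travel as metadata rather than inside the message string.
  def forwarded(node, tenant_slug) do
    Logger.info("forwarded sync results to StatusHandler",
      node: node,
      tenant: tenant_slug
    )
  end
end
```

Whether `tenant_slug` counts as sensitive should still be checked against the project's data-classification rules; metadata only makes redaction easier, it does not decide it.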
  end

Learn more about managing compliance generic rules or creating your own custom rules

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
false) do {:ok, groups} -> groups {:error, _} -> [] end end defp load_sweep_group(scope, id) do case Ash.get(SweepGroup, id, scope: scope, authorize?: false) do {:ok, group} -> group {:error, _} -> nil end end defp load_sweep_profiles(scope) do case Ash.read(SweepProfile, scope: scope, authorize?: false) do {:ok, profiles} -> profiles {:error, _} -> [] end end ... (clipped 27 lines) ``` </details> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td><details> <summary><strong>Generic: Security-First Input Validation and Data Handling</strong></summary><br> **Objective:** Ensure all data inputs are validated, sanitized, and handled securely to prevent <br>vulnerabilities<br> **Status:** 🏷️<br><a href='https://github.com/carverauto/serviceradar/pull/2263/files#diff-b2127e71582033bc6dfd2d7397f56bf43c1f7c613defffc504b3d8ee1e7406c4R1900-R1947'><strong>Authorization bypass</strong></a>: Multiple reads/gets for sweep configuration and executions explicitly set <code>authorize?: </code><br><code>false</code>, bypassing authorization checks for user-accessible LiveView data operations.<br> <details open><summary>Referred Code</summary> ```elixir defp load_sweep_groups(scope) do case Ash.read(SweepGroup, scope: scope, authorize?: false) do {:ok, groups} -> groups {:error, _} -> [] end end defp load_sweep_group(scope, id) do case Ash.get(SweepGroup, id, scope: scope, authorize?: false) do {:ok, group} -> group {:error, _} -> nil end end defp load_sweep_profiles(scope) do case Ash.read(SweepProfile, scope: scope, authorize?: false) do {:ok, profiles} -> profiles {:error, _} -> [] end end ... 
(clipped 27 lines) ``` </details> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td rowspan=2>⚪</td> <td><details> <summary><strong>Generic: Comprehensive Audit Trails</strong></summary><br> **Objective:** To create a detailed and reliable record of critical system actions for security analysis <br>and compliance.<br> **Status:** 🏷️<br><a href='https://github.com/carverauto/serviceradar/pull/2263/files#diff-b2127e71582033bc6dfd2d7397f56bf43c1f7c613defffc504b3d8ee1e7406c4R234-R299'><strong>Missing audit logging</strong></a>: Critical configuration actions (create/update/delete/toggle of sweep groups and profiles) <br>are performed without any explicit audit log entries that capture actor, action, and <br>outcome.<br> <details open><summary>Referred Code</summary> ```elixir def handle_event("toggle_group", %{"id" => id}, socket) do scope = socket.assigns.current_scope case load_sweep_group(scope, id) do nil -> {:noreply, put_flash(socket, :error, "Sweep group not found")} group -> action = if group.enabled, do: :disable, else: :enable case Ash.update(group, action, scope: scope) do {:ok, _updated} -> {:noreply, socket |> assign(:sweep_groups, load_sweep_groups(scope)) |> put_flash( :info, "Sweep group #{if action == :enable, do: "enabled", else: "disabled"}" )} {:error, _} -> ... 
(clipped 45 lines) ``` </details> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td><details> <summary><strong>Generic: Secure Logging Practices</strong></summary><br> **Objective:** To ensure logs are useful for debugging and auditing without exposing sensitive <br>information like PII, PHI, or cardholder data.<br> **Status:** 🏷️<br><a href='https://github.com/carverauto/serviceradar/pull/2263/files#diff-2d04050dff3ba2cc8153559e33a892f5f421982bf6dcbda7172857b3bf398a02R207-R226'><strong>Unstructured logging</strong></a>: New log statements emit unstructured string logs (and include identifiers like <br><code>tenant_slug</code>) rather than structured fields, requiring verification against the <br>project&#x27;s logging standards and sensitivity classification.<br> <details open><summary>Referred Code</summary> ```elixir Logger.info( "Forwarded sync results to StatusHandler on #{node} for tenant=#{tenant_slug}" ) else Logger.debug("Forwarded status to StatusHandler on #{node} for tenant=#{tenant_slug}") end :ok catch :exit, reason -> Logger.warning("Failed to forward status to core on #{node}: #{inspect(reason)}") {:error, :forward_failed} end {:error, :not_found} -> if is_sync_results do Logger.warning("No StatusHandler found on any node for sync results") else Logger.debug("No StatusHandler found on any node") end ``` </details> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td align="center" colspan="2"> - [ ] Update <!-- /compliance --update_compliance=true --> </td></tr></tbody></table> 
<details><summary>Compliance status legend</summary> 🟢 - Fully Compliant<br> 🟡 - Partial Compliant<br> 🔴 - Not Compliant<br> ⚪ - Requires Further Human Verification<br> 🏷️ - Compliance label<br> </details>
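The query-injection finding above stems from `escape_srql_value/1` quoting only on spaces. A minimal sketch of stricter escaping, assuming SRQL string literals honor backslash escapes (the module and function names here are illustrative, not the project's code):

```elixir
# Hypothetical stricter SRQL value escaping: always quote, and escape
# backslashes and double quotes so metacharacters cannot alter the query.
defmodule SrqlEscapeSketch do
  def escape_value(value) when is_binary(value) do
    escaped =
      value
      |> String.replace("\\", "\\\\")
      |> String.replace("\"", "\\\"")

    "\"" <> escaped <> "\""
  end
end
```

Whitelisting field names and operators server-side remains the stronger defense; escaping alone depends on the SRQL parser actually honoring these escapes.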
github-advanced-security[bot] commented 2026-01-12 03:07:49 +00:00 (Migrated from github.com)

Imported GitHub PR review comment.

Original author: @github-advanced-security[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2263#discussion_r2680740763
Original created: 2026-01-12T03:07:49Z
Original path: pkg/agent/server.go
Original line: 914

Size computation for allocation may overflow

This operation, which is used in an allocation, involves a potentially large value and might overflow.

[Show more details](https://github.com/carverauto/serviceradar/security/code-scanning/89)
github-advanced-security[bot] commented 2026-01-12 03:07:49 +00:00 (Migrated from github.com)

Imported GitHub PR review comment.

Original author: @github-advanced-security[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2263#discussion_r2680740764
Original created: 2026-01-12T03:07:49Z
Original path: pkg/agent/server.go
Original line: 983

Size computation for allocation may overflow

This operation, which is used in an allocation, involves a potentially large value and might overflow.

[Show more details](https://github.com/carverauto/serviceradar/security/code-scanning/90)
qodo-code-review[bot] commented 2026-01-12 03:09:12 +00:00 (Migrated from github.com)

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2263#issuecomment-3736735024
Original created: 2026-01-12T03:09:12Z

PR Code Suggestions

Latest suggestions up to d7444e5

Category | Suggestion | Impact
Incremental [*]
Fix misplaced module endings

Correct a syntax error by moving the get_sweep_tenant/1 function inside the
ServiceRadarWebNGWeb.DeviceLive.Show module and removing a misplaced end
keyword.

web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex [1862-1868]

+case Ash.read(query, authorize?: true) do
+  {:ok, results} when results != [] ->
+    %{results: results, total: length(results)}
 
+  _ ->
+    nil
+end

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a critical syntax error where a misplaced end keyword would cause a compilation failure by defining get_sweep_tenant/1 outside the ServiceRadarWebNGWeb.DeviceLive.Show module.

High
Fix mismatched block endings

Remove the extra end from the emit_forward_metrics/4 function to fix a syntax
error that prevents compilation.

elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_processor.ex [255-278]

 defp emit_forward_metrics(result, status, from_buffer, started_at) do
   if should_buffer?(status) do
     duration_ms =
       System.monotonic_time()
       |> Kernel.-(started_at)
       |> System.convert_time_unit(:native, :millisecond)
 
     :telemetry.execute(
       [:serviceradar, :agent_gateway, :results, :forward],
       %{count: 1, duration_ms: duration_ms},
       %{
         result: result,
         from_buffer: from_buffer,
         service_type: status[:service_type],
         service_name: status[:service_name],
         agent_id: status[:agent_id],
         gateway_id: status[:gateway_id],
         tenant_slug: status[:tenant_slug],
         partition: status[:partition]
       }
     )
   end
-  end
 end
Suggestion importance[1-10]: 9

Why: The suggestion correctly identifies a syntax error (an extra end) that would prevent the code from compiling, which is a critical issue.

High
Add pagination loop termination guard

Add guards to the fetch_all_uids_paginated function to prevent infinite
recursion by checking if the next_cursor has changed and if the last fetch
returned any results.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [1218-1224]

 next_cursor = Map.get(pagination, "next_cursor")
 new_acc = [uids | acc]
 
-if is_binary(next_cursor) do
+if is_binary(next_cursor) and next_cursor != cursor and uids != [] do
   fetch_all_uids_paginated(srql_module, scope, query, next_cursor, new_acc)
 else
   finalize_uid_acc(new_acc)
 end

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 8


Why: This suggestion addresses a critical potential issue where a faulty API response during pagination could cause an infinite loop, hanging the LiveView process. Adding checks for an empty result set and a non-changing cursor makes the pagination logic significantly more resilient.

Medium
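The termination guards suggested above can be isolated in a small sketch (the `fetch_page` callback is hypothetical, standing in for the SRQL client):

```elixir
# Cursor pagination that stops when the cursor is missing, repeats, or a
# page comes back empty, so a faulty API response cannot cause an
# infinite loop.
defmodule PaginateSketch do
  def all(fetch_page, cursor \\ nil, acc \\ []) do
    {items, next_cursor} = fetch_page.(cursor)
    acc = acc ++ items

    if is_binary(next_cursor) and next_cursor != cursor and items != [] do
      all(fetch_page, next_cursor, acc)
    else
      acc
    end
  end
end
```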
Ensure safe JSONB parameter casting

Explicitly encode the new_tags map to a JSON string before passing it to the SQL
fragment to ensure reliable casting to jsonb in the database.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [281-291]

 Ash.bulk_update(query, :update, %{},
   scope: scope,
   return_records?: false,
   return_errors?: true,
   atomic_update: %{
     tags:
       expr(
-        fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ^ref(:tags), ^new_tags)
+        fragment(
+          "coalesce(?, '{}'::jsonb) || (?::jsonb)",
+          ^ref(:tags),
+          ^Jason.encode!(new_tags)
+        )
       )
   }
 )

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: The suggestion correctly points out that implicit casting of a map to jsonb can be unreliable. Explicitly encoding the map to a JSON string with Jason.encode! makes the database operation more robust and less prone to driver-specific or version-specific issues.

Medium
Cap stored port list size

Limit the number of open_ports stored in the database to a reasonable maximum
(e.g., 1024) to prevent performance issues and database bloat.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [329-348]

 open_ports =
   open_ports_raw
   |> List.wrap()
   |> Enum.map(fn
     port when is_integer(port) ->
       port
 
     port when is_binary(port) ->
       case Integer.parse(port) do
         {parsed, ""} -> parsed
         _ -> nil
       end
 
     _ ->
       nil
   end)
   |> Enum.reject(&is_nil/1)
   |> Enum.filter(&(&1 >= 1 and &1 <= 65_535))
   |> Enum.uniq()
   |> Enum.sort()
+  |> Enum.take(1_024)
Suggestion importance[1-10]: 7

Why: This is a valuable performance and scalability improvement, preventing potential database bloat and slow queries by capping the number of stored open ports, which could otherwise be very large.

Medium
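Run end to end with the suggested cap, the pipeline above behaves like this self-contained sketch (the `normalize_ports` binding is illustrative):

```elixir
# Port normalization: accepts integers and numeric strings, drops anything
# outside 1..65535, dedupes, sorts, and applies the suggested 1024 cap.
normalize_ports = fn raw ->
  raw
  |> List.wrap()
  |> Enum.map(fn
    port when is_integer(port) ->
      port

    port when is_binary(port) ->
      case Integer.parse(port) do
        {parsed, ""} -> parsed
        _ -> nil
      end

    _ ->
      nil
  end)
  |> Enum.reject(&is_nil/1)
  |> Enum.filter(&(&1 >= 1 and &1 <= 65_535))
  |> Enum.uniq()
  |> Enum.sort()
  |> Enum.take(1_024)
end
```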
Prevent tight-loop batch flushing

In the handle_info(:flush, ...) function, change the 0ms delay to a small
positive value (e.g., 10) when scheduling the next flush to prevent a tight loop
and improve process responsiveness.

elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_buffer.ex [79-83]

 if more? do
-  Process.send_after(self(), :flush, 0)
+  Process.send_after(self(), :flush, 10)
 else
   schedule_flush(state.flush_interval_ms)
 end
Suggestion importance[1-10]: 7

Why: The suggestion correctly identifies a potential for a tight loop that could starve the GenServer process. Introducing a small delay is a good practice to ensure the process remains responsive.

Medium
Use consistent progress map keys

Refactor the key access for the @execution_progress map into a variable to
improve readability and ensure consistent keying.

web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [952-954]

+progress_key = execution.execution_id || execution.id
+
 progress={
-  Map.get(@execution_progress, execution.execution_id || execution.id)
+  Map.get(@execution_progress, progress_key)
 }

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 2


Why: The suggestion correctly identifies a potential key mismatch but the existing_code already attempts to handle it with execution.execution_id || execution.id. The proposed change only refactors this into a variable, offering a minor readability improvement without altering the logic.

Low
Possible issue
Prefer specific records deterministically

Modify the :for_agent read action to deterministically select the most specific
configuration by adding sorting and a limit, ensuring an agent-specific config
is preferred over a partition-default one.

elixir/serviceradar_core/lib/serviceradar/agent_config/config_instance.ex [69-79]

 read :for_agent do
   argument :config_type, :atom, allow_nil?: false
   argument :partition, :string, allow_nil?: false
   argument :agent_id, :string, allow_nil?: true
 
   filter expr(
            config_type == ^arg(:config_type) and
              partition == ^arg(:partition) and
              (is_nil(^arg(:agent_id)) or agent_id == ^arg(:agent_id) or is_nil(agent_id))
          )
+
+  prepare build(sort: [agent_id: :desc_nils_last])
+  prepare build(limit: 1)
 end
Suggestion importance[1-10]: 9

Why: This suggestion identifies a critical logical flaw where a non-deterministic query could lead to an agent receiving a generic configuration instead of its specific one, which could cause incorrect behavior.

High
Support IPv6 CIDR matching

Add support for IPv6 CIDR matching in ip_matches_cidr?/3 to prevent incorrect
targeting for IPv6 devices.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/target_criteria.ex [320-329]

     defp ip_matches_cidr?(ip, network, mask)
          when tuple_size(ip) == 4 and tuple_size(network) == 4 and mask >= 0 and mask <= 32 do
       # IPv4
       ip_int = ipv4_to_int(ip)
       network_int = ipv4_to_int(network)
       mask_bits = bsl(0xFFFFFFFF, 32 - mask) &&& 0xFFFFFFFF
       (ip_int &&& mask_bits) == (network_int &&& mask_bits)
     end
 
+    defp ip_matches_cidr?(ip, network, mask)
+         when tuple_size(ip) == 8 and tuple_size(network) == 8 and mask >= 0 and mask <= 128 do
+      # IPv6 (128-bit)
+      ip_int = ipv6_to_int(ip)
+      network_int = ipv6_to_int(network)
+      mask_bits = bsl((1 <<< 128) - 1, 128 - mask)
+      (ip_int &&& mask_bits) == (network_int &&& mask_bits)
+    end
+
     defp ip_matches_cidr?(_ip, _network, _mask), do: false
 
+    defp ipv6_to_int({a, b, c, d, e, f, g, h}) do
+      bsl(a, 112) + bsl(b, 96) + bsl(c, 80) + bsl(d, 64) +
+        bsl(e, 48) + bsl(f, 32) + bsl(g, 16) + h
+    end
+
Suggestion importance[1-10]: 8

Why: The suggestion identifies a significant bug where IPv6 addresses would silently fail to match CIDR criteria, leading to incorrect sweep targeting.

Medium
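The mask arithmetic in the IPv4 clause above can be exercised directly with `:inet.parse_address/1`; the IPv6 clause is the same pattern widened to 128 bits (the bindings here are illustrative):

```elixir
# IPv4 prefix match using the same integer-mask logic as ip_matches_cidr?/3.
ipv4_to_int = fn {a, b, c, d} ->
  Bitwise.bsl(a, 24) + Bitwise.bsl(b, 16) + Bitwise.bsl(c, 8) + d
end

matches? = fn ip_str, net_str, mask ->
  {:ok, ip} = :inet.parse_address(String.to_charlist(ip_str))
  {:ok, net} = :inet.parse_address(String.to_charlist(net_str))
  # Keep only the top `mask` bits of each address before comparing.
  mask_bits = Bitwise.band(Bitwise.bsl(0xFFFFFFFF, 32 - mask), 0xFFFFFFFF)
  Bitwise.band(ipv4_to_int.(ip), mask_bits) == Bitwise.band(ipv4_to_int.(net), mask_bits)
end
```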
Parse SRQL count result

In get_matching_device_count, handle cases where the SRQL count is a string or
float, not just an integer, by parsing it to prevent incorrect nil results.

web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [2294-2309]

 defp get_matching_device_count(scope, criteria) do
   srql_query = criteria_to_srql_query(criteria)
 
   if srql_query == "" do
     nil
   else
     srql_module =
       Application.get_env(:serviceradar_web_ng, :srql_module, ServiceRadarWebNG.SRQL)
 
     full_query = ~s|in:devices #{srql_query} stats:"count() as total"|
 
     case srql_module.query(full_query, %{scope: scope}) do
-      {:ok, %{"results" => [%{"total" => count} | _]}} when is_integer(count) -> count
-      _ -> nil
+      {:ok, %{"results" => [%{"total" => count} | _]}} ->
+        cond do
+          is_integer(count) ->
+            count
+
+          is_float(count) ->
+            trunc(count)
+
+          is_binary(count) ->
+            case Integer.parse(count) do
+              {int, ""} -> int
+              _ -> nil
+            end
+
+          true ->
+            nil
+        end
+
+      _ ->
+        nil
     end
   end
 end

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: This suggestion correctly points out that relying on the count being an integer is too strict, as JSON decoding can produce strings or floats. The proposed change makes the function more robust by handling these other valid numeric types.

Medium
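The tolerant parsing in the suggestion reduces to a small total function (the `parse_count` binding is illustrative):

```elixir
# Accepts integers, floats, and numeric strings; anything else becomes nil.
parse_count = fn
  count when is_integer(count) ->
    count

  count when is_float(count) ->
    trunc(count)

  count when is_binary(count) ->
    case Integer.parse(count) do
      {int, ""} -> int
      _ -> nil
    end

  _ ->
    nil
end
```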
Stop parsing tenant IDs

Use the tenant_id variable directly for PubSub broadcasts instead of parsing it
from the tenant_schema string.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [102-103]

-    # Extract tenant_id for PubSub broadcasts
-    broadcast_tenant_id = extract_tenant_id_from_schema(tenant_schema)
+    # Use provided tenant_id for PubSub broadcasts (don't parse schema names)
+    broadcast_tenant_id = tenant_id
Suggestion importance[1-10]: 7

Why: The suggestion correctly points out that parsing the schema name to get the tenant_id is brittle and a less robust approach than using the already available tenant_id variable.

Medium
Use safe Ecto table prefixes

Refactor the Ecto query to use the prefix option for dynamic schemas instead of
string interpolation, and use table/1 to specify the table name.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_data_cleanup_worker.ex [118-143]

 base_query =
-  from(r in {schema <> "." <> table, resource},
+  from(r in table(^table),
+    prefix: schema,
     where: field(r, ^timestamp_field) < ^cutoff,
     order_by: [asc: field(r, ^timestamp_field)],
-    select: r.id,
+    select: field(r, :id),
     limit: ^batch_size
   )
 ...
 delete_query =
-  from(r in {schema <> "." <> table, resource},
-    where: r.id in ^ids
+  from(r in table(^table),
+    prefix: schema,
+    where: field(r, :id) in ^ids
   )

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies an unsafe and non-standard Ecto query pattern and proposes using the proper prefix option, which improves code robustness, security, and adherence to best practices.

Medium
Reset bulk selection state

In the clear_selection event handler, reset the select_all_matching and
total_matching_count assigns, in addition to selected_devices, to ensure the
bulk selection state is fully cleared.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [159-161]

 def handle_event("clear_selection", _params, socket) do
-  {:noreply, assign(socket, :selected_devices, MapSet.new())}
+  {:noreply,
+   socket
+   |> assign(:selected_devices, MapSet.new())
+   |> assign(:select_all_matching, false)
+   |> assign(:total_matching_count, nil)}
 end

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies a state management bug where clearing the selection does not reset the "select all" state, leading to an inconsistent UI for the new bulk actions feature.

Low

Previous suggestions

Suggestions up to commit 8f1186c
Category | Suggestion | Impact
Incremental [*]
Prevent runtime JSONB cast failures

Remove the explicit ::jsonb cast on the ^new_tags parameter within the fragment.
The database layer already handles map encoding, and this cast could cause
runtime errors.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [271-276]

 atomic_update: %{
   tags:
     expr(
-      fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ref(:tags), ^new_tags)
+      fragment("coalesce(?, '{}'::jsonb) || ?", ref(:tags), ^new_tags)
     )
 }
Suggestion importance[1-10]: 8


Why: This suggestion correctly identifies a potential runtime error by removing an explicit jsonb cast on a parameter that is already a map, making the database operation more robust.

Medium
Resolve tenant correctly for history
Suggestion Impact: The change stopped reading tenant_id from changeset attributes and instead uses the tenant from the changeset context (changeset.tenant). It also adds a guard that errors when tenant is nil and passes the resolved tenant into version creation, aligning with the goal of correctly resolving tenant for history creation.

code diff:

-    tenant_id = Ash.Changeset.get_attribute(changeset, :tenant_id)
+    tenant = changeset.tenant
 
     # Extract actor info if available
     {actor_id, actor_email} = extract_actor_info(context)
 
-    # Create the version record
-    version_attrs = %{
-      config_instance_id: config_instance_id,
-      version: current_version,
-      compiled_config: current_config,
-      content_hash: current_hash,
-      source_ids: current_source_ids,
-      actor_id: actor_id,
-      actor_email: actor_email,
-      change_reason: "Config updated"
-    }
+    if is_nil(tenant) do
+      Ash.Changeset.add_error(changeset,
+        field: :tenant_id,
+        message: "tenant context is required to record config history"
+      )
+    else
+      # Create the version record
+      version_attrs = %{
+        config_instance_id: config_instance_id,
+        version: current_version,
+        compiled_config: current_config,
+        content_hash: current_hash,
+        source_ids: current_source_ids,
+        actor_id: actor_id,
+        actor_email: actor_email,
+        change_reason: "Config updated"
+      }
 
-    case create_version(version_attrs, tenant_id) do
-      {:ok, _version} ->
-        changeset
+      case create_version(version_attrs, tenant, context) do
+        {:ok, _version} ->
+          changeset
 
-      {:error, error} ->
-        Logger.warning("Failed to create config version history: #{inspect(error)}")
-        # Don't fail the main operation if history creation fails
-        changeset
+        {:error, error} ->
+          Logger.warning("Failed to create config version history: #{inspect(error)}")
+          # Don't fail the main operation if history creation fails
+          changeset
+      end
     end
   end
 
@@ -64,9 +71,11 @@
     end
   end
 
-  defp create_version(attrs, tenant_id) do
+  defp create_version(attrs, tenant, context) do
+    actor = Map.get(context, :actor)
+
     ServiceRadar.AgentConfig.ConfigVersion
     |> Ash.Changeset.for_create(:create, attrs)
-    |> Ash.create(authorize?: false, tenant: tenant_id)
+    |> Ash.create(authorize?: false, tenant: tenant, actor: actor)
   end

Correctly resolve the tenant_id by reading it from the existing record data or
the context, instead of from the changeset attributes where it may be missing
during an update.

elixir/serviceradar_core/lib/serviceradar/agent_config/changes/create_version_history.ex [26-31]

 config_instance_id = Ash.Changeset.get_data(changeset, :id)
 current_version = Ash.Changeset.get_data(changeset, :version) || 0
 current_config = Ash.Changeset.get_data(changeset, :compiled_config) || %{}
 current_hash = Ash.Changeset.get_data(changeset, :content_hash) || ""
 current_source_ids = Ash.Changeset.get_data(changeset, :source_ids) || []
-tenant_id = Ash.Changeset.get_attribute(changeset, :tenant_id)
+tenant_id = Ash.Changeset.get_data(changeset, :tenant_id) || Map.get(context, :tenant)

[Suggestion processed]

Suggestion importance[1-10]: 8

__

Why: This is a valid and important bug fix that prevents potential data corruption or loss of audit history in a multi-tenant system by ensuring the tenant_id is correctly resolved.

Medium
Fix progress lookup key mismatch
Suggestion Impact: Updated the progress lookup to use `execution.execution_id || execution.id` instead of `Map.get(execution, :execution_id) || execution.id`, aligning the key access with the expected `execution_id` field usage.

code diff:

@@ -916,7 +950,7 @@
               execution={execution}
               group={Map.get(@groups_map, execution.sweep_group_id)}
               progress={
-                Map.get(@execution_progress, Map.get(execution, :execution_id) || execution.id)
+                Map.get(@execution_progress, execution.execution_id || execution.id)
               }

Correct the progress lookup key to consistently use execution.execution_id. The
current fallback to execution.id may cause progress updates to fail if the id
and execution_id differ.

web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [918-920]

 progress={
-  Map.get(@execution_progress, Map.get(execution, :execution_id) || execution.id)
+  Map.get(@execution_progress, execution.execution_id || execution.id)
 }
Suggestion importance[1-10]: 7


Why: This is a valid concern as the progress map is keyed by execution_id, and falling back to execution.id would fail if they differ, preventing real-time progress updates.

Medium
Prevent runtime crashes on map access
Suggestion Impact: The code mapping device records to canonical_device_id was changed from dot-access to Map.get(&1, :canonical_device_id), matching the suggestion and preventing potential KeyError crashes.

code diff:

       (available_ips ++ unavailable_ips)
       |> Enum.map(&Map.get(device_map, &1))
       |> Enum.reject(&is_nil/1)
-      |> Enum.map(& &1.canonical_device_id)
+      |> Enum.map(&Map.get(&1, :canonical_device_id))
       |> Enum.reject(&is_nil/1)

Replace unsafe dot-access on a map with Map.get/2 to safely retrieve the
:canonical_device_id and prevent potential runtime crashes if the key is absent.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [394-400]

 all_device_uids =
   (available_ips ++ unavailable_ips)
   |> Enum.map(&Map.get(device_map, &1))
   |> Enum.reject(&is_nil/1)
-  |> Enum.map(& &1.canonical_device_id)
+  |> Enum.map(&Map.get(&1, :canonical_device_id))
   |> Enum.reject(&is_nil/1)
   |> Enum.uniq()

[Suggestion processed]

Suggestion importance[1-10]: 7


Why: This suggestion correctly identifies that using dot-access (&1.canonical_device_id) is unsafe for maps and can cause runtime errors if the key is missing, proposing a switch to Map.get/2 which improves the code's robustness.

Medium
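The difference the suggestion relies on: dot access raises `KeyError` on a plain map that lacks the key, while `Map.get/2` returns nil. A minimal demonstration:

```elixir
# Safe access: returns nil when :canonical_device_id is absent.
safe_id = Map.get(%{}, :canonical_device_id)

# Unsafe access: dot syntax on a plain map raises KeyError for missing keys.
unsafe_result =
  try do
    m = %{}
    m.canonical_device_id
  rescue
    KeyError -> :key_error
  end
```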
Emit metrics for all results
Suggestion Impact: Changed emit_forward_metrics/4 to gate telemetry emission with should_buffer?(status) instead of checking only status[:source] == "results", ensuring metrics are emitted for both string and atom sources.

code diff:

   defp emit_forward_metrics(result, status, from_buffer, started_at) do
-    if status[:source] == "results" do
+    if should_buffer?(status) do

Update the condition for emitting metrics to match the buffering logic, ensuring
metrics are captured for both string and atom versions of the results source.

elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_processor.ex [251-277]

 defp should_buffer?(status) do
   status[:source] in ["results", :results]
 end
 
 defp emit_forward_metrics(result, status, from_buffer, started_at) do
-  if status[:source] == "results" do
+  if should_buffer?(status) do
     duration_ms =
       System.monotonic_time()
       |> Kernel.-(started_at)
       |> System.convert_time_unit(:native, :millisecond)
 
     :telemetry.execute(
       [:serviceradar, :agent_gateway, :results, :forward],
       %{count: 1, duration_ms: duration_ms},
       %{
         result: result,
         from_buffer: from_buffer,
         service_type: status[:service_type],
         service_name: status[:service_name],
         agent_id: status[:agent_id],
         gateway_id: status[:gateway_id],
         tenant_slug: status[:tenant_slug],
         partition: status[:partition]
       }
     )
   end
 end

[Suggestion processed]

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies an inconsistency where telemetry metrics would be missed if status[:source] is an atom, improving observability by ensuring all relevant events are tracked.

Medium
Stabilize count return value

In get_total_matching_count, return 0 instead of nil on query failure. This
ensures type stability and prevents confusing UI messages like "All nil matching
device(s) selected".

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [1183-1185]

 _ ->
-  nil
+  0
Suggestion importance[1-10]: 6

__

Why: The suggestion correctly points out that returning nil can lead to confusing UI output and makes a good case for returning 0 to maintain type stability and improve user experience.

Low
Avoid ETS race crash
Suggestion Impact: The commit addresses the same general concern (ETS table not existing / being terminated) by adding :ets.whereis(@table_name) == :undefined guards before performing ETS operations (insert/delete/select_delete/info). It does not implement the specific suggested try/rescue around :ets.lookup/2, but it applies a related defensive pattern to avoid crashes when the table is missing.

code diff:

+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      key = cache_key(tenant_id, config_type, partition, agent_id)
+      expires_at = System.monotonic_time(:millisecond) + ttl_ms
+
+      :ets.insert(@table_name, {key, entry, expires_at})
+      :ok
+    end
   end
 
   @doc """
@@ -104,13 +108,17 @@
   """
   @spec invalidate(String.t(), atom()) :: :ok
   def invalidate(tenant_id, config_type) do
-    # Match all keys for this tenant and config type
-    match_spec = [
-      {{{tenant_id, config_type, :_, :_}, :_, :_}, [], [true]}
-    ]
-
-    :ets.select_delete(@table_name, match_spec)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      # Match all keys for this tenant and config type
+      match_spec = [
+        {{{tenant_id, config_type, :_, :_}, :_, :_}, [], [true]}
+      ]
+
+      :ets.select_delete(@table_name, match_spec)
+      :ok
+    end
   end
 
   @doc """
@@ -118,12 +126,16 @@
   """
   @spec invalidate_tenant(String.t()) :: :ok
   def invalidate_tenant(tenant_id) do
-    match_spec = [
-      {{{tenant_id, :_, :_, :_}, :_, :_}, [], [true]}
-    ]
-
-    :ets.select_delete(@table_name, match_spec)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      match_spec = [
+        {{{tenant_id, :_, :_, :_}, :_, :_}, [], [true]}
+      ]
+
+      :ets.select_delete(@table_name, match_spec)
+      :ok
+    end
   end
 
   @doc """
@@ -131,9 +143,13 @@
   """
   @spec invalidate_key(String.t(), atom(), String.t(), String.t() | nil) :: :ok
   def invalidate_key(tenant_id, config_type, partition, agent_id) do
-    key = cache_key(tenant_id, config_type, partition, agent_id)
-    :ets.delete(@table_name, key)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      key = cache_key(tenant_id, config_type, partition, agent_id)
+      :ets.delete(@table_name, key)
+      :ok
+    end
   end
 
   @doc """
@@ -141,8 +157,12 @@
   """
   @spec clear_all() :: :ok
   def clear_all do
-    :ets.delete_all_objects(@table_name)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      :ets.delete_all_objects(@table_name)
+      :ok
+    end
   end
 
   @doc """
@@ -150,10 +170,14 @@
   """
   @spec stats() :: map()
   def stats do
-    %{
-      size: :ets.info(@table_name, :size),
-      memory_bytes: :ets.info(@table_name, :memory) * :erlang.system_info(:wordsize)
-    }
+    if :ets.whereis(@table_name) == :undefined do
+      %{size: 0, memory_bytes: 0}
+    else
+      %{
+        size: :ets.info(@table_name, :size),
+        memory_bytes: :ets.info(@table_name, :memory) * :erlang.system_info(:wordsize)
+      }
+    end
   end

Prevent a potential crash from a race condition by wrapping the :ets.lookup/2
call in a try/rescue block to handle cases where the ETS table is terminated
unexpectedly.

elixir/serviceradar_core/lib/serviceradar/agent_config/config_cache.ex [46-64]

 if :ets.whereis(@table_name) == :undefined do
   :miss
 else
   key = cache_key(tenant_id, config_type, partition, agent_id)
 
-  case :ets.lookup(@table_name, key) do
-    [{^key, entry, expires_at}] ->
-      if System.monotonic_time(:millisecond) < expires_at do
-        {:ok, entry}
-      else
-        # Expired - delete and return miss
-        :ets.delete(@table_name, key)
+  try do
+    case :ets.lookup(@table_name, key) do
+      [{^key, entry, expires_at}] ->
+        if System.monotonic_time(:millisecond) < expires_at do
+          {:ok, entry}
+        else
+          # Expired - delete and return miss
+          :ets.delete(@table_name, key)
+          :miss
+        end
+
+      [] ->
         :miss
-      end
-
-    [] ->
+    end
+  rescue
+    ArgumentError ->
       :miss
   end
 end

[Suggestion processed]

Suggestion importance[1-10]: 6

__

Why: The suggestion correctly identifies a race condition with ETS table lookups and proposes a valid fix to improve the code's resilience against crashes, making the cache access more robust.

Low
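For context on why both defenses work, here is a minimal standalone sketch (the :demo_cache table name is made up) showing the ETS behavior the suggestion relies on: :ets.whereis/1 returns :undefined for a missing named table, while :ets.lookup/2 raises ArgumentError, which is the window that both the whereis guard and the try/rescue close:

```elixir
# No table named :demo_cache exists yet — whereis is safe to call.
:undefined = :ets.whereis(:demo_cache)

# lookup on a missing table raises; rescuing it turns the race into a cache miss.
miss =
  try do
    :ets.lookup(:demo_cache, :some_key)
  rescue
    ArgumentError -> :miss
  end

# Once the table is created, whereis returns a table identifier instead.
:ets.new(:demo_cache, [:named_table, :set, :public])
IO.inspect({miss, :ets.whereis(:demo_cache) != :undefined})
# => {:miss, true}
```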
Possible issue
Chunk large bulk updates

Refactor update_tags_for_uids to process updates in chunks. This prevents
potential database errors when applying bulk actions to a large number of
selected devices.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [258-297]

 defp update_tags_for_uids(scope, uids, new_tags) do
-  query =
-    Device
-    |> Ash.Query.for_read(:read, %{})
-    |> Ash.Query.filter(uid in ^uids)
+  uids
+  |> Enum.chunk_every(1000)
+  |> Enum.reduce_while({:ok, 0}, fn uid_chunk, {:ok, acc_count} ->
+    query =
+      Device
+      |> Ash.Query.for_read(:read, %{})
+      |> Ash.Query.filter(uid in ^uid_chunk)
 
-  case Ash.count(query, scope: scope) do
-    {:ok, existing_count} ->
-      result =
-        Ash.bulk_update(query, :update, %{},
-          scope: scope,
-          return_records?: false,
-          return_errors?: true,
-          atomic_update: %{
-            tags:
-              expr(
-                fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ref(:tags), ^new_tags)
-              )
-          }
-        )
+    case Ash.count(query, scope: scope) do
+      {:ok, existing_count} ->
+        result =
+          Ash.bulk_update(query, :update, %{},
+            scope: scope,
+            return_records?: false,
+            return_errors?: true,
+            atomic_update: %{
+              tags:
+                expr(
+                  fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ref(:tags), ^new_tags)
+                )
+            }
+          )
 
-      case result do
-        %Ash.BulkResult{status: :success} ->
-          if existing_count < length(uids) do
-            {:error, "One or more devices were not found"}
-          else
-            {:ok, existing_count}
-          end
+        case result do
+          %Ash.BulkResult{status: :success} ->
+            if existing_count < length(uid_chunk) do
+              {:halt, {:error, "One or more devices were not found"}}
+            else
+              {:cont, {:ok, acc_count + existing_count}}
+            end
 
-        %Ash.BulkResult{status: :partial_success, errors: errors} ->
-          {:error, format_changeset_errors(List.first(errors || []))}
+          %Ash.BulkResult{status: :partial_success, errors: errors} ->
+            {:halt, {:error, format_changeset_errors(List.first(errors || []))}}
 
-        %Ash.BulkResult{status: :error, errors: errors} ->
-          {:error, format_changeset_errors(List.first(errors || []))}
-      end
+          %Ash.BulkResult{status: :error, errors: errors} ->
+            {:halt, {:error, format_changeset_errors(List.first(errors || []))}}
+        end
 
-    {:error, error} ->
-      {:error, format_changeset_errors(error)}
-  end
+      {:error, error} ->
+        {:halt, {:error, format_changeset_errors(error)}}
+    end
+  end)
 end
Suggestion importance[1-10]: 8

__

Why: The suggestion correctly identifies a potential scalability issue with large bulk updates and proposes a robust solution using chunking to prevent database errors, which is a critical fix for reliability.

Medium
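As a standalone illustration of the control flow the suggestion relies on (the data is made up), Enum.chunk_every/2 bounds each batch and Enum.reduce_while/3 either keeps accumulating with {:cont, acc} or aborts early with {:halt, error}:

```elixir
# Split ids into batches of at most 2, then fold over the batches,
# stopping at the first "bad" batch.
chunks = Enum.chunk_every([1, 2, 3, 4, 5], 2)
# chunks == [[1, 2], [3, 4], [5]]

result =
  Enum.reduce_while(chunks, {:ok, 0}, fn chunk, {:ok, count} ->
    if Enum.any?(chunk, &(&1 < 0)) do
      # Abort the whole fold, surfacing the error as the return value.
      {:halt, {:error, "invalid id in batch"}}
    else
      # Continue with the running count of processed ids.
      {:cont, {:ok, count + length(chunk)}}
    end
  end)

IO.inspect(result)
# => {:ok, 5}
```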
Avoid infinite cleanup batching loops

Add a case to handle when Repo.delete_all/1 deletes zero records to prevent a
potential infinite loop in the do_cleanup_batch function.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_data_cleanup_worker.ex [145-162]

 case Repo.delete_all(delete_query) do
+  {0, _} ->
+    Logger.warning(
+      "SweepDataCleanupWorker: Delete returned 0 for #{table} in #{schema}; stopping batch to avoid infinite loop"
+    )
+
+    %{acc | errors: acc.errors + 1}
+
   {count, _} ->
     Logger.debug(
       "SweepDataCleanupWorker: Deleted #{count} #{table} records from #{schema}"
     )
 
     # Continue with next batch
     do_cleanup_batch(
       schema,
       table,
       resource,
       timestamp_field,
       cutoff,
       batch_size,
       extra_filter,
       %{acc | deleted: acc.deleted + count}
     )
 end
Suggestion importance[1-10]: 8

__

Why: This suggestion correctly identifies and fixes a potential infinite loop in the cleanup worker, which is a significant robustness improvement.

Medium
Prevent crashes from bad types
Suggestion Impact: The patch replaces the direct div/2 call with logic that validates/parses the raw response time value into an integer (accepting integers and integer-strings) before dividing, otherwise returning nil. This prevents div/2 crashes on bad types, aligning with the suggestion’s goal (though it does not add explicit float handling).

code diff:

         # Parse response time (convert ns to ms)
-        response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
-        response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000)
+        response_time_ns_raw = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
+
+        response_time_ns =
+          cond do
+            is_integer(response_time_ns_raw) ->
+              response_time_ns_raw
+
+            is_binary(response_time_ns_raw) ->
+              case Integer.parse(response_time_ns_raw) do
+                {parsed, ""} -> parsed
+                _ -> nil
+              end
+
+            true ->
+              nil
+          end
+
+        response_time_ms =
+          if is_integer(response_time_ns),
+            do: div(response_time_ns, 1_000_000),
+            else: nil

Add type checking before calling div/2 on response_time_ns to handle potential
float or string values and prevent crashes during result ingestion.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [304-305]

 response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
-response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000)
 
+response_time_ms =
+  case response_time_ns do
+    ns when is_integer(ns) -> div(ns, 1_000_000)
+    ns when is_float(ns) -> ns |> trunc() |> div(1_000_000)
+    ns when is_binary(ns) ->
+      case Integer.parse(ns) do
+        {int, _rest} -> div(int, 1_000_000)
+        :error -> nil
+      end
+
+    _ ->
+      nil
+  end
+

[Suggestion processed]

Suggestion importance[1-10]: 7

__

Why: The suggestion correctly identifies that div/2 will crash with non-integer inputs, and the proposed change adds robust type checking to prevent ingestion failures from malformed data.

Medium
Include tenant id in actor
Suggestion Impact: The commit updates the fallback "system" actor map to include tenant_id: Scope.tenant_id(scope), matching the suggestion to ensure tenant-aware authorization.

code diff:

@@ -1876,10 +1881,15 @@
           tenant_id: Scope.tenant_id(scope)
         }
 
-      _ ->
-        %{id: "system", email: "system@serviceradar", role: :admin}
-    end
-  end
+    _ ->
+      %{
+        id: "system",
+        email: "system@serviceradar",
+        role: :admin,
+        tenant_id: Scope.tenant_id(scope)
+      }
+  end
+end

Add the tenant_id from the scope to the fallback "system" actor in
build_sweep_actor/1 to ensure correct authorization behavior.

web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex [1869-1882]

 defp build_sweep_actor(scope) do
   case scope do
     %{user: user} when not is_nil(user) ->
       %{
         id: user.id,
         email: user.email,
         role: user.role,
         tenant_id: Scope.tenant_id(scope)
       }
 
     _ ->
-      %{id: "system", email: "system@serviceradar", role: :admin}
+      %{
+        id: "system",
+        email: "system@serviceradar",
+        role: :admin,
+        tenant_id: Scope.tenant_id(scope)
+      }
   end
 end
Suggestion importance[1-10]: 7

__

Why: The suggestion correctly points out that the fallback actor is missing a tenant_id, which is important for authorization, and the proposed change correctly adds it.

Medium
Keep selection mode consistent
Suggestion Impact: The toggle_select_all handler was updated to assign selected_devices and also reset select_all_matching to false and total_matching_count to nil, matching the suggested UX consistency improvement.

code diff:

@@ -149,7 +149,11 @@
         MapSet.union(selected, device_uids)
       end
 
-    {:noreply, assign(socket, :selected_devices, updated)}
+    {:noreply,
+     socket
+     |> assign(:selected_devices, updated)
+     |> assign(:select_all_matching, false)
+     |> assign(:total_matching_count, nil)}
   end

Update the toggle_select_all event handler to reset the select_all_matching
state. This ensures that manually toggling the page selection provides clear and
consistent UI feedback.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [132-153]

 def handle_event("toggle_select_all", _params, socket) do
   devices = socket.assigns.devices
   selected = socket.assigns.selected_devices
 
   device_uids =
     devices
     |> Enum.filter(&is_map/1)
     |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end)
     |> Enum.filter(&is_binary/1)
     |> MapSet.new()
 
   updated =
     if MapSet.subset?(device_uids, selected) do
       # All current page devices selected, deselect them
       MapSet.difference(selected, device_uids)
     else
       # Select all current page devices
       MapSet.union(selected, device_uids)
     end
 
-  {:noreply, assign(socket, :selected_devices, updated)}
+  {:noreply,
+   socket
+   |> assign(:selected_devices, updated)
+   |> assign(:select_all_matching, false)
+   |> assign(:total_matching_count, nil)}
 end
Suggestion importance[1-10]: 6

__

Why: This is a valid UX improvement that makes the bulk selection feature more consistent and less confusing for the user, aligning its behavior with toggle_device_select.

Low
Security
Enforce tenant context for reads
Suggestion Impact: The commit refactored load_sweep_results/2 to call get_sweep_tenant/1 inside a case expression, returning nil when tenant is nil and only building/executing the Ash query when a tenant is present, matching the suggested multitenancy guard.

code diff:

   defp load_sweep_results(scope, ip) when is_binary(ip) do
     actor = build_sweep_actor(scope)
-    tenant = get_sweep_tenant(scope)
-
-    require Ash.Query
-
-    query =
-      SweepHostResult
-      |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant)
-      |> Ash.Query.sort(inserted_at: :desc)
-      |> Ash.Query.limit(10)
-
-    case Ash.read(query, authorize?: true) do
-      {:ok, results} when results != [] ->
-        %{results: results, total: length(results)}
-
-      _ ->
+
+    case get_sweep_tenant(scope) do
+      nil ->
         nil
+
+      tenant ->
+        require Ash.Query
+
+        query =
+          SweepHostResult
+          |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant)
+          |> Ash.Query.sort(inserted_at: :desc)
+          |> Ash.Query.limit(10)
+
+        case Ash.read(query, authorize?: true) do
+          {:ok, results} when results != [] ->
+            %{results: results, total: length(results)}
+
+          _ ->
+            nil
+        end
     end
   end

Add a guard to ensure a tenant is present before executing the Ash query in
load_sweep_results/2 to prevent potential multitenancy bypass.

web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex [1846-1865]

 defp load_sweep_results(scope, ip) when is_binary(ip) do
   actor = build_sweep_actor(scope)
-  tenant = get_sweep_tenant(scope)
 
-  require Ash.Query
+  case get_sweep_tenant(scope) do
+    nil ->
+      nil
 
-  query =
-    SweepHostResult
-    |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant)
-    |> Ash.Query.sort(inserted_at: :desc)
-    |> Ash.Query.limit(10)
+    tenant ->
+      require Ash.Query
 
-  case Ash.read(query, authorize?: true) do
-    {:ok, results} when results != [] ->
-      %{results: results, total: length(results)}
+      query =
+        SweepHostResult
+        |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant)
+        |> Ash.Query.sort(inserted_at: :desc)
+        |> Ash.Query.limit(10)
 
-    _ ->
-      nil
+      case Ash.read(query, authorize?: true) do
+        {:ok, results} when results != [] ->
+          %{results: results, total: length(results)}
+
+        _ ->
+          nil
+      end
   end
 end
Suggestion importance[1-10]: 8

__

Why: This is a valid security concern, as passing a nil tenant to an Ash query could bypass multitenancy checks, and the proposed change correctly adds a guard to prevent this.

Medium
General
Avoid quadratic list concatenation
Suggestion Impact: The commit changed the accumulator updates from `uids ++ acc` to `[uids | acc]` in both pagination branches and updated `finalize_uid_acc/1` to `List.flatten()` after reversing, matching the suggested performance optimization.

code diff:

@@ -1203,7 +1215,7 @@
           |> Enum.filter(&is_binary/1)
 
         next_cursor = Map.get(pagination, "next_cursor")
-        new_acc = uids ++ acc
+        new_acc = [uids | acc]
 
         if is_binary(next_cursor) do
           fetch_all_uids_paginated(srql_module, scope, query, next_cursor, new_acc)
@@ -1218,7 +1230,7 @@
           |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end)
           |> Enum.filter(&is_binary/1)
 
-        finalize_uid_acc(uids ++ acc)
+        finalize_uid_acc([uids | acc])
 
       _ ->
         finalize_uid_acc(acc)
@@ -1228,6 +1240,7 @@
   defp finalize_uid_acc(acc) do
     acc
     |> Enum.reverse()
+    |> List.flatten()
     |> Enum.uniq()

Optimize the fetch_all_uids_paginated function by prepending UIDs to the
accumulator instead of using list concatenation. This improves performance and
reduces memory usage when fetching all matching device UIDs.

web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [1193-1232]

 defp fetch_all_uids_paginated(srql_module, scope, query, cursor, acc) do
   full_query = "in:devices #{query} limit:1000"
   opts = if cursor, do: %{scope: scope, cursor: cursor}, else: %{scope: scope}
 
   case srql_module.query(full_query, opts) do
     {:ok, %{"results" => results, "pagination" => pagination}} when is_list(results) ->
       uids =
         results
         |> Enum.filter(&is_map/1)
         |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end)
         |> Enum.filter(&is_binary/1)
 
       next_cursor = Map.get(pagination, "next_cursor")
-      new_acc = uids ++ acc
+      new_acc = [uids | acc]
 
       if is_binary(next_cursor) do
         fetch_all_uids_paginated(srql_module, scope, query, next_cursor, new_acc)
       else
         finalize_uid_acc(new_acc)
       end
 
     {:ok, %{"results" => results}} when is_list(results) ->
       uids =
         results
         |> Enum.filter(&is_map/1)
         |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end)
         |> Enum.filter(&is_binary/1)
 
-      finalize_uid_acc(uids ++ acc)
+      finalize_uid_acc([uids | acc])
 
     _ ->
       finalize_uid_acc(acc)
   end
 end
 
 defp finalize_uid_acc(acc) do
   acc
   |> Enum.reverse()
+  |> List.flatten()
   |> Enum.uniq()
 end
Suggestion importance[1-10]: 7

__

Why: This is a valid performance optimization that avoids quadratic complexity in list concatenation during pagination, which is important for the "select all" feature to work efficiently with large datasets.

Medium
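A tiny sketch (with made-up uid lists) of why the prepend-then-flatten accumulator preserves page order while avoiding repeated ++:

```elixir
# Each page of results is prepended as a nested list — O(1) per page —
# then a single reverse/flatten restores the original page order.
pages = [["a", "b"], ["c"], ["b", "d"]]

acc = Enum.reduce(pages, [], fn uids, acc -> [uids | acc] end)
# acc == [["b", "d"], ["c"], ["a", "b"]]

uids =
  acc
  |> Enum.reverse()
  |> List.flatten()
  |> Enum.uniq()

IO.inspect(uids)
# => ["a", "b", "c", "d"]
```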
Suggestions up to commit 8f1186c
Security
Validate fields/operators before SRQL

Harden the SRQL generation by whitelisting field and operator values in
criteria_clause/2 to prevent potential injection vulnerabilities.

web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [2280-2296]

 defp criteria_to_srql_query(criteria) when is_map(criteria) do
   clauses =
     criteria
     |> Enum.map(fn {field, spec} -> criteria_clause(field, spec) end)
     |> Enum.reject(fn clause -> is_nil(clause) or clause == "" end)
 
   Enum.join(clauses, " ")
 end
 
 defp criteria_clause(field, spec) when is_map(spec) do
-  case Map.to_list(spec) do
-    [{operator, value}] -> clause_for_operator(field, operator, value)
-    _ -> ""
+  if field_known?(field) do
+    case Map.to_list(spec) do
+      [{operator, value}] ->
+        if operator_known?(field, operator) do
+          clause_for_operator(field, operator, value)
+        else
+          ""
+        end
+
+      _ ->
+        ""
+    end
+  else
+    ""
   end
 end
 
 defp criteria_clause(_field, _spec), do: ""
Suggestion importance[1-10]: 9

__

Why: This is a critical security hardening suggestion that prevents potential SRQL injection by validating user-provided field and operator values against a known list before constructing the query.

High
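The diff above references field_known?/1 and operator_known?/2 without showing them; a minimal sketch of such whitelist helpers, with hypothetical field and operator sets (the real ones would come from the sweep criteria DSL, not this sketch), could look like:

```elixir
defmodule CriteriaWhitelist do
  @moduledoc false

  # Hypothetical whitelists — illustrative only, not the project's real sets.
  @known_fields MapSet.new(["hostname", "ip", "os", "vendor", "tags"])
  @known_operators MapSet.new([:eq, :neq, :in, :contains, :in_cidr, :has_any])

  def field_known?(field) when is_binary(field),
    do: MapSet.member?(@known_fields, field)

  def field_known?(_), do: false

  def operator_known?(_field, operator),
    do: MapSet.member?(@known_operators, operator)
end

# Unknown fields/operators are rejected before any SRQL is built.
IO.inspect(CriteriaWhitelist.field_known?("hostname"))        # => true
IO.inspect(CriteriaWhitelist.field_known?("; drop table"))    # => false
IO.inspect(CriteriaWhitelist.operator_known?("ip", :in_cidr)) # => true
```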
Possible issue
Normalize ports before database insert
Suggestion Impact: The commit implements normalization of the open ports field by introducing an open_ports_raw variable and transforming it via List.wrap, Integer.parse for strings, rejecting nils, filtering to valid port range, and applying uniq/sort prior to insertion, preventing batch insert failures from bad data.

code diff:

         # Parse open ports
-        open_ports = result["tcp_ports_open"] || result["tcpPortsOpen"] || []
+        open_ports_raw = result["tcp_ports_open"] || result["tcpPortsOpen"] || []
+
+        open_ports =
+          open_ports_raw
+          |> List.wrap()
+          |> Enum.map(fn
+            port when is_integer(port) ->
+              port
+
+            port when is_binary(port) ->
+              case Integer.parse(port) do
+                {parsed, ""} -> parsed
+                _ -> nil
+              end
+
+            _ ->
+              nil
+          end)
+          |> Enum.reject(&is_nil/1)
+          |> Enum.filter(&(&1 >= 1 and &1 <= 65_535))
+          |> Enum.uniq()
+          |> Enum.sort()
 

Normalize the open_ports list by converting all values to integers and filtering
out invalid entries before database insertion to prevent batch insertion
failures.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [307-308]

 # Parse open ports
-open_ports = result["tcp_ports_open"] || result["tcpPortsOpen"] || []
+open_ports_raw = result["tcp_ports_open"] || result["tcpPortsOpen"] || []
 
+open_ports =
+  open_ports_raw
+  |> List.wrap()
+  |> Enum.map(fn
+    p when is_integer(p) -> p
+    p when is_binary(p) ->
+      case Integer.parse(p) do
+        {n, ""} -> n
+        _ -> nil
+      end
+
+    _ ->
+      nil
+  end)
+  |> Enum.reject(&is_nil/1)
+  |> Enum.filter(&(&1 >= 1 and &1 <= 65_535))
+  |> Enum.uniq()
+  |> Enum.sort()
+
Suggestion importance[1-10]: 8

__

Why: This suggestion correctly identifies a potential data type mismatch that could cause Repo.insert_all to fail for an entire batch, improving the robustness of the data ingestion pipeline.

Medium
Prevent unbounded flush looping
Suggestion Impact: The commit changed handle_info(:flush) to call flush_queue/2 once, then either immediately re-sends :flush when more items remain or schedules the normal interval flush otherwise. It also removed the recursive flush_all_batches/2 helper, eliminating unbounded recursion in a single handle_info call.

code diff:

   def handle_info(:flush, state) do
-    {state, _more?} = flush_all_batches(state, @default_flush_batch_size)
-    schedule_flush(state.flush_interval_ms)
+    {state, more?} = flush_queue(state, @default_flush_batch_size)
+
+    if more? do
+      Process.send_after(self(), :flush, 0)
+    else
+      schedule_flush(state.flush_interval_ms)
+    end
+
     {:noreply, state}
   end
 
@@ -109,16 +115,6 @@
     end
   end
 
-  defp flush_all_batches(state, batch_size) do
-    {state, more?} = flush_queue(state, batch_size)
-
-    if more? do
-      flush_all_batches(state, batch_size)
-    else
-      {state, false}
-    end
-  end
-
   defp schedule_flush(flush_interval_ms) do
     Process.send_after(self(), :flush, max(flush_interval_ms, 1_000))
   end

Modify the flush logic to process only one batch per :flush event and schedule a
subsequent flush if more items remain, instead of using unbounded recursion
within a single handle_info call.

elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_buffer.ex [76-80]

 def handle_info(:flush, state) do
-  {state, _more?} = flush_all_batches(state, @default_flush_batch_size)
-  schedule_flush(state.flush_interval_ms)
+  {state, more?} = flush_queue(state, @default_flush_batch_size)
+
+  if more? do
+    Process.send_after(self(), :flush, 0)
+  else
+    schedule_flush(state.flush_interval_ms)
+  end
+
   {:noreply, state}
 end
Suggestion importance[1-10]: 8

__

Why: This suggestion correctly identifies a GenServer anti-pattern that could block the process and cause timeouts, and the proposed fix is the standard, correct way to handle batch work without compromising responsiveness.

Medium
Prevent response-time parsing crashes
Suggestion Impact: The commit implemented defensive parsing for icmp response time by introducing a raw value, converting strings via Integer.parse/1, and only performing div/2 when an integer is present, preventing potential runtime crashes. (It also added similar defensive parsing for open ports.)

code diff:

         # Parse response time (convert ns to ms)
-        response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
-        response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000)
+        response_time_ns_raw = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
+
+        response_time_ns =
+          cond do
+            is_integer(response_time_ns_raw) ->
+              response_time_ns_raw
+
+            is_binary(response_time_ns_raw) ->
+              case Integer.parse(response_time_ns_raw) do
+                {parsed, ""} -> parsed
+                _ -> nil
+              end
+
+            true ->
+              nil
+          end
+
+        response_time_ms =
+          if is_integer(response_time_ns),
+            do: div(response_time_ns, 1_000_000),
+            else: nil

Defensively parse the response_time_ns value, handling non-integer types like
strings, to prevent crashes during the division operation and ensure batch
processing is not aborted.

elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [303-305]

 # Parse response time (convert ns to ms)
-response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
-response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000)
+response_time_ns_raw = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"]
 
+response_time_ns =
+  cond do
+    is_integer(response_time_ns_raw) ->
+      response_time_ns_raw
+
+    is_binary(response_time_ns_raw) ->
+      case Integer.parse(response_time_ns_raw) do
+        {n, ""} -> n
+        _ -> nil
+      end
+
+    true ->
+      nil
+  end
+
+response_time_ms = if is_integer(response_time_ns), do: div(response_time_ns, 1_000_000), else: nil
+
Suggestion importance[1-10]: 7

__

Why: The suggestion correctly identifies that div/2 will crash if response_time_ns is not an integer, which is plausible for JSON data, and this would halt the entire batch processing.

Medium
Guard ETS access when unavailable
Suggestion Impact: The commit wraps all the specified public ETS access functions (put, invalidate, invalidate_tenant, invalidate_key, clear_all, stats) with an :ets.whereis(@table_name) == :undefined check, returning :ok (or zeroed stats) when the table is missing, matching the suggested robustness fix.

code diff:

@@ -92,11 +92,15 @@
   """
   @spec put(String.t(), atom(), String.t(), String.t() | nil, map()) :: :ok
   def put(tenant_id, config_type, partition, agent_id, entry, ttl_ms \\ @default_ttl_ms) do
-    key = cache_key(tenant_id, config_type, partition, agent_id)
-    expires_at = System.monotonic_time(:millisecond) + ttl_ms
...
Imported GitHub PR comment. Original author: @qodo-code-review[bot] Original URL: https://github.com/carverauto/serviceradar/pull/2263#issuecomment-3736735024 Original created: 2026-01-12T03:09:12Z --- ## PR Code Suggestions ✨ <!-- d7444e5 --> Latest suggestions up to d7444e5 <table><thead><tr><td><strong>Category</strong></td><td align=left><strong>Suggestion&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </strong></td><td align=center><strong>Impact</strong></td></tr><tbody><tr><td rowspan=7>Incremental <sup><a href='https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/'>[*]</a></sup></td> <td> <details><summary>Fix misplaced module endings</summary> ___ **Correct a syntax error by moving the <code>get_sweep_tenant/1</code> function inside the <br><code>ServiceRadarWebNGWeb.DeviceLive.Show</code> module and removing a misplaced <code>end</code> <br>keyword.** [web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex [1862-1868]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-92d8e0af57d2f65dfb8562e896dbea17ef9f47074ffd14f58261decc76f80c24R1862-R1868) ```diff +case Ash.read(query, authorize?: true) do + {:ok, results} when results != [] -> + %{results: results, total: length(results)} + _ -> + nil +end ``` `[To ensure code accuracy, apply this suggestion manually]` <details><summary>Suggestion importance[1-10]: 9</summary> __ Why: The suggestion correctly identifies a critical syntax error where a misplaced `end` keyword would cause a compilation failure by defining `get_sweep_tenant/1` outside the 
`ServiceRadarWebNGWeb.DeviceLive.Show` module. </details></details></td><td align=center>High </td></tr><tr><td> <details><summary>Fix mismatched block endings</summary> ___ **Remove the extra <code>end</code> from the <code>emit_forward_metrics/4</code> function to fix a syntax <br>error that prevents compilation.** [elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_processor.ex [255-278]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-2d04050dff3ba2cc8153559e33a892f5f421982bf6dcbda7172857b3bf398a02R255-R278) ```diff defp emit_forward_metrics(result, status, from_buffer, started_at) do if should_buffer?(status) do duration_ms = System.monotonic_time() |> Kernel.-(started_at) |> System.convert_time_unit(:native, :millisecond) :telemetry.execute( [:serviceradar, :agent_gateway, :results, :forward], %{count: 1, duration_ms: duration_ms}, %{ result: result, from_buffer: from_buffer, service_type: status[:service_type], service_name: status[:service_name], agent_id: status[:agent_id], gateway_id: status[:gateway_id], tenant_slug: status[:tenant_slug], partition: status[:partition] } ) end - end end ``` - [ ] **Apply / Chat** <!-- /improve --apply_suggestion=1 --> <details><summary>Suggestion importance[1-10]: 9</summary> __ Why: The suggestion correctly identifies a syntax error (an extra `end`) that would prevent the code from compiling, which is a critical issue. 
</details></details></td><td align=center>High
</td></tr><tr><td>

<details><summary>Add pagination loop termination guard</summary>

___

**Add guards to the <code>fetch_all_uids_paginated</code> function to prevent infinite recursion by checking if the <code>next_cursor</code> has changed and if the last fetch returned any results.**

[web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [1218-1224]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R1218-R1224)

```diff
 next_cursor = Map.get(pagination, "next_cursor")
 new_acc = [uids | acc]

-if is_binary(next_cursor) do
+if is_binary(next_cursor) and next_cursor != cursor and uids != [] do
   fetch_all_uids_paginated(srql_module, scope, query, next_cursor, new_acc)
 else
   finalize_uid_acc(new_acc)
 end
```

`[To ensure code accuracy, apply this suggestion manually]`

<details><summary>Suggestion importance[1-10]: 8</summary>

___

Why: This suggestion addresses a critical potential issue where a faulty API response during pagination could cause an infinite loop, hanging the LiveView process. Adding checks for an empty result set and a non-changing cursor makes the pagination logic significantly more resilient.
</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Ensure safe JSONB parameter casting</summary>

___

**Explicitly encode the <code>new_tags</code> map to a JSON string before passing it to the SQL <code>fragment</code> to ensure reliable casting to <code>jsonb</code> in the database.**

[web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [281-291]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R281-R291)

```diff
 Ash.bulk_update(query, :update, %{},
   scope: scope,
   return_records?: false,
   return_errors?: true,
   atomic_update: %{
     tags:
       expr(
-        fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ^ref(:tags), ^new_tags)
+        fragment(
+          "coalesce(?, '{}'::jsonb) || (?::jsonb)",
+          ^ref(:tags),
+          ^Jason.encode!(new_tags)
+        )
       )
   }
 )
```

`[To ensure code accuracy, apply this suggestion manually]`

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: The suggestion correctly points out that implicit casting of a map to `jsonb` can be unreliable. Explicitly encoding the map to a JSON string with `Jason.encode!` makes the database operation more robust and less prone to driver-specific or version-specific issues.
</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Cap stored port list size</summary>

___

**Limit the number of <code>open_ports</code> stored in the database to a reasonable maximum (e.g., 1024) to prevent performance issues and database bloat.**

[elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [329-348]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-06f4b3bf56e5f1d122b25040ea7f321125d6cae20606811dc0b2a0ddc7a66226R329-R348)

```diff
 open_ports =
   open_ports_raw
   |> List.wrap()
   |> Enum.map(fn
     port when is_integer(port) ->
       port

     port when is_binary(port) ->
       case Integer.parse(port) do
         {parsed, ""} -> parsed
         _ -> nil
       end

     _ ->
       nil
   end)
   |> Enum.reject(&is_nil/1)
   |> Enum.filter(&(&1 >= 1 and &1 <= 65_535))
   |> Enum.uniq()
   |> Enum.sort()
+  |> Enum.take(1_024)
```

- [ ] **Apply / Chat** <!-- /improve --apply_suggestion=4 -->

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: This is a valuable performance and scalability improvement, preventing potential database bloat and slow queries by capping the number of stored open ports, which could otherwise be very large.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Prevent tight-loop batch flushing</summary>

___

**In the <code>handle_info(:flush, ...)</code> function, change the <code>0</code>ms delay to a small positive value (e.g., <code>10</code>) when scheduling the next flush to prevent a tight loop and improve process responsiveness.**

[elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_buffer.ex [79-83]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-557fa02eff90dabc3757fea28b49208bb07d03698bd485219bc7301058dbb2c4R79-R83)

```diff
 if more?
 do
-  Process.send_after(self(), :flush, 0)
+  Process.send_after(self(), :flush, 10)
 else
   schedule_flush(state.flush_interval_ms)
 end
```

- [ ] **Apply / Chat** <!-- /improve --apply_suggestion=5 -->

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: The suggestion correctly identifies a potential for a tight loop that could starve the `GenServer` process. Introducing a small delay is a good practice to ensure the process remains responsive.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Use consistent progress map keys</summary>

___

**Refactor the key access for the <code>@execution_progress</code> map into a variable to improve readability and ensure consistent keying.**

[web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [952-954]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-b2127e71582033bc6dfd2d7397f56bf43c1f7c613defffc504b3d8ee1e7406c4R952-R954)

```diff
+progress_key = execution.execution_id || execution.id
+
 progress={
-  Map.get(@execution_progress, execution.execution_id || execution.id)
+  Map.get(@execution_progress, progress_key)
 }
```

`[To ensure code accuracy, apply this suggestion manually]`

<details><summary>Suggestion importance[1-10]: 2</summary>

___

Why: The suggestion correctly identifies a potential key mismatch but the `existing_code` already attempts to handle it with `execution.execution_id || execution.id`. The proposed change only refactors this into a variable, offering a minor readability improvement without altering the logic.
</details></details></td><td align=center>Low
</td></tr><tr><td rowspan=6>Possible issue</td>
<td>

<details><summary>Prefer specific records deterministically</summary>

___

**Modify the <code>:for_agent</code> read action to deterministically select the most specific configuration by adding sorting and a limit, ensuring an agent-specific config is preferred over a partition-default one.**

[elixir/serviceradar_core/lib/serviceradar/agent_config/config_instance.ex [69-79]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-37fd443381f2242517ed060fccbc634e3f2500b06a8702393ad8e7e034caece2R69-R79)

```diff
 read :for_agent do
   argument :config_type, :atom, allow_nil?: false
   argument :partition, :string, allow_nil?: false
   argument :agent_id, :string, allow_nil?: true

   filter expr(
            config_type == ^arg(:config_type) and
              partition == ^arg(:partition) and
              (is_nil(^arg(:agent_id)) or agent_id == ^arg(:agent_id) or is_nil(agent_id))
          )
+
+  prepare build(sort: [agent_id: :desc_nils_last])
+  prepare build(limit: 1)
 end
```

- [ ] **Apply / Chat** <!-- /improve --apply_suggestion=7 -->

<details><summary>Suggestion importance[1-10]: 9</summary>

___

Why: This suggestion identifies a critical logical flaw where a non-deterministic query could lead to an agent receiving a generic configuration instead of its specific one, which could cause incorrect behavior.
</details></details></td><td align=center>High
</td></tr><tr><td>

<details><summary>Support IPv6 CIDR matching</summary>

___

**Add support for IPv6 CIDR matching in <code>ip_matches_cidr?/3</code> to prevent incorrect targeting for IPv6 devices.**

[elixir/serviceradar_core/lib/serviceradar/sweep_jobs/target_criteria.ex [320-329]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-a71ead6955e751bd5d27c6d81b1723147aebfbe4c80deedc20afbaed02afc062R320-R329)

```diff
 defp ip_matches_cidr?(ip, network, mask)
      when tuple_size(ip) == 4 and tuple_size(network) == 4 and mask >= 0 and mask <= 32 do
   # IPv4
   ip_int = ipv4_to_int(ip)
   network_int = ipv4_to_int(network)
   mask_bits = bsl(0xFFFFFFFF, 32 - mask) &&& 0xFFFFFFFF
   (ip_int &&& mask_bits) == (network_int &&& mask_bits)
 end

+defp ip_matches_cidr?(ip, network, mask)
+     when tuple_size(ip) == 8 and tuple_size(network) == 8 and mask >= 0 and mask <= 128 do
+  # IPv6 (128-bit)
+  ip_int = ipv6_to_int(ip)
+  network_int = ipv6_to_int(network)
+  mask_bits = bsl((1 <<< 128) - 1, 128 - mask)
+  (ip_int &&& mask_bits) == (network_int &&& mask_bits)
+end
+
+defp ip_matches_cidr?(_ip, _network, _mask), do: false
+
+defp ipv6_to_int({a, b, c, d, e, f, g, h}) do
+  bsl(a, 112) + bsl(b, 96) + bsl(c, 80) + bsl(d, 64) +
+    bsl(e, 48) + bsl(f, 32) + bsl(g, 16) + h
+end
```

- [ ] **Apply / Chat** <!-- /improve --apply_suggestion=8 -->

<details><summary>Suggestion importance[1-10]: 8</summary>

___

Why: The suggestion identifies a significant bug where IPv6 addresses would silently fail to match CIDR criteria, leading to incorrect sweep targeting.
</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Parse SRQL count result</summary>

___

**In <code>get_matching_device_count</code>, handle cases where the SRQL count is a string or float, not just an integer, by parsing it to prevent incorrect <code>nil</code> results.**

[web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [2294-2309]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-b2127e71582033bc6dfd2d7397f56bf43c1f7c613defffc504b3d8ee1e7406c4R2294-R2309)

```diff
 defp get_matching_device_count(scope, criteria) do
   srql_query = criteria_to_srql_query(criteria)

   if srql_query == "" do
     nil
   else
     srql_module =
       Application.get_env(:serviceradar_web_ng, :srql_module, ServiceRadarWebNG.SRQL)

     full_query = ~s|in:devices #{srql_query} stats:"count() as total"|

     case srql_module.query(full_query, %{scope: scope}) do
-      {:ok, %{"results" => [%{"total" => count} | _]}} when is_integer(count) -> count
-      _ -> nil
+      {:ok, %{"results" => [%{"total" => count} | _]}} ->
+        cond do
+          is_integer(count) ->
+            count
+
+          is_float(count) ->
+            trunc(count)
+
+          is_binary(count) ->
+            case Integer.parse(count) do
+              {int, ""} -> int
+              _ -> nil
+            end
+
+          true ->
+            nil
+        end
+
+      _ ->
+        nil
     end
   end
 end
```

`[To ensure code accuracy, apply this suggestion manually]`

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: This suggestion correctly points out that relying on the `count` being an integer is too strict, as JSON decoding can produce strings or floats. The proposed change makes the function more robust by handling these other valid numeric types.
</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Stop parsing tenant IDs</summary>

___

**Use the <code>tenant_id</code> variable directly for PubSub broadcasts instead of parsing it from the <code>tenant_schema</code> string.**

[elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [102-103]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-06f4b3bf56e5f1d122b25040ea7f321125d6cae20606811dc0b2a0ddc7a66226R102-R103)

```diff
-# Extract tenant_id for PubSub broadcasts
-broadcast_tenant_id = extract_tenant_id_from_schema(tenant_schema)
+# Use provided tenant_id for PubSub broadcasts (don't parse schema names)
+broadcast_tenant_id = tenant_id
```

- [ ] **Apply / Chat** <!-- /improve --apply_suggestion=10 -->

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: The suggestion correctly points out that parsing the schema name to get the `tenant_id` is brittle and a less robust approach than using the already available `tenant_id` variable.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Use safe Ecto table prefixes</summary>

___

**Refactor the Ecto query to use the <code>prefix</code> option for dynamic schemas instead of string interpolation, and use <code>table/1</code> to specify the table name.**

[elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_data_cleanup_worker.ex [118-143]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-b8660522af1e1ad3ba8da56f754567fdae03d09fa9e18749a3a49435893073feR118-R143)

```diff
 base_query =
-  from(r in {schema <> "." <> table, resource},
+  from(r in table(^table),
+    prefix: schema,
     where: field(r, ^timestamp_field) < ^cutoff,
     order_by: [asc: field(r, ^timestamp_field)],
-    select: r.id,
+    select: field(r, :id),
     limit: ^batch_size
   )

 ...

 delete_query =
-  from(r in {schema <> "."
 <> table, resource},
-    where: r.id in ^ids
+  from(r in table(^table),
+    prefix: schema,
+    where: field(r, :id) in ^ids
   )
```

`[To ensure code accuracy, apply this suggestion manually]`

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: The suggestion correctly identifies an unsafe and non-standard Ecto query pattern and proposes using the proper `prefix` option, which improves code robustness, security, and adherence to best practices.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Reset bulk selection state</summary>

___

**In the <code>clear_selection</code> event handler, reset the <code>select_all_matching</code> and <code>total_matching_count</code> assigns, in addition to <code>selected_devices</code>, to ensure the bulk selection state is fully cleared.**

[web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [159-161]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R159-R161)

```diff
 def handle_event("clear_selection", _params, socket) do
-  {:noreply, assign(socket, :selected_devices, MapSet.new())}
+  {:noreply,
+   socket
+   |> assign(:selected_devices, MapSet.new())
+   |> assign(:select_all_matching, false)
+   |> assign(:total_matching_count, nil)}
 end
```

`[To ensure code accuracy, apply this suggestion manually]`

<details><summary>Suggestion importance[1-10]: 6</summary>

___

Why: The suggestion correctly identifies a state management bug where clearing the selection does not reset the "select all" state, leading to an inconsistent UI for the new bulk actions feature.
</details></details></td><td align=center>Low
</td></tr>
<tr><td align="center" colspan="2">

- [ ] More <!-- /improve --more_suggestions=true -->

</td><td></td></tr></tbody></table>

___

#### Previous suggestions

<details><summary>✅ Suggestions up to commit 8f1186c</summary>
<br><table><thead><tr><td><strong>Category</strong></td><td align=left><strong>Suggestion&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</strong></td><td align=center><strong>Impact</strong></td></tr><tbody><tr><td rowspan=7>Incremental <sup><a href='https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/'>[*]</a></sup></td>
<td>

<details><summary>Prevent runtime JSONB cast failures<!-- not_implemented --></summary>

___

**Remove the explicit <code>::jsonb</code> cast on the <code>^new_tags</code> parameter within the <code>fragment</code>. The database layer already handles map encoding, and this cast could cause runtime errors.**

[web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [271-276]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R271-R276)

```diff
 atomic_update: %{
   tags:
     expr(
-      fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ref(:tags), ^new_tags)
+      fragment("coalesce(?, '{}'::jsonb) || ?", ref(:tags), ^new_tags)
     )
 }
```

<details><summary>Suggestion importance[1-10]: 8</summary>

___

Why: This suggestion correctly identifies a potential runtime error by removing an explicit `jsonb` cast on a parameter that is already a map, making the database operation more robust.
</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>✅ <s>Resolve tenant correctly for history</s></summary>

___

<details><summary><b>Suggestion Impact:</b></summary>The change stopped reading tenant_id from changeset attributes and instead uses the tenant from the changeset context (changeset.tenant). It also adds a guard that errors when tenant is nil and passes the resolved tenant into version creation, aligning with the goal of correctly resolving tenant for history creation.

code diff:

```diff
-      tenant_id = Ash.Changeset.get_attribute(changeset, :tenant_id)
+      tenant = changeset.tenant

       # Extract actor info if available
       {actor_id, actor_email} = extract_actor_info(context)

-      # Create the version record
-      version_attrs = %{
-        config_instance_id: config_instance_id,
-        version: current_version,
-        compiled_config: current_config,
-        content_hash: current_hash,
-        source_ids: current_source_ids,
-        actor_id: actor_id,
-        actor_email: actor_email,
-        change_reason: "Config updated"
-      }
+      if is_nil(tenant) do
+        Ash.Changeset.add_error(changeset,
+          field: :tenant_id,
+          message: "tenant context is required to record config history"
+        )
+      else
+        # Create the version record
+        version_attrs = %{
+          config_instance_id: config_instance_id,
+          version: current_version,
+          compiled_config: current_config,
+          content_hash: current_hash,
+          source_ids: current_source_ids,
+          actor_id: actor_id,
+          actor_email: actor_email,
+          change_reason: "Config updated"
+        }

-      case create_version(version_attrs, tenant_id) do
-        {:ok, _version} ->
-          changeset
+        case create_version(version_attrs, tenant, context) do
+          {:ok, _version} ->
+            changeset

-        {:error, error} ->
-          Logger.warning("Failed to create config version history: #{inspect(error)}")
-          # Don't fail the main operation if history creation fails
-          changeset
+          {:error, error} ->
+            Logger.warning("Failed to create config version history: #{inspect(error)}")
+            # Don't fail the main operation if history creation fails
+
            changeset
+        end
       end
     end
@@ -64,9 +71,11 @@
       end
     end

-  defp create_version(attrs, tenant_id) do
+  defp create_version(attrs, tenant, context) do
+    actor = Map.get(context, :actor)
+
     ServiceRadar.AgentConfig.ConfigVersion
     |> Ash.Changeset.for_create(:create, attrs)
-    |> Ash.create(authorize?: false, tenant: tenant_id)
+    |> Ash.create(authorize?: false, tenant: tenant, actor: actor)
   end
```

</details>

___

**Correctly resolve the <code>tenant_id</code> by reading it from the existing record data or the context, instead of from the changeset attributes where it may be missing during an update.**

[elixir/serviceradar_core/lib/serviceradar/agent_config/changes/create_version_history.ex [26-31]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-edf276f4ea9f73468d843cc8db84191727a3e37c3e964146c29419e91f120d50R26-R31)

```diff
 config_instance_id = Ash.Changeset.get_data(changeset, :id)
 current_version = Ash.Changeset.get_data(changeset, :version) || 0
 current_config = Ash.Changeset.get_data(changeset, :compiled_config) || %{}
 current_hash = Ash.Changeset.get_data(changeset, :content_hash) || ""
 current_source_ids = Ash.Changeset.get_data(changeset, :source_ids) || []
-tenant_id = Ash.Changeset.get_attribute(changeset, :tenant_id)
+tenant_id = Ash.Changeset.get_data(changeset, :tenant_id) || Map.get(context, :tenant)
```

`[Suggestion processed]`

<details><summary>Suggestion importance[1-10]: 8</summary>

___

Why: This is a valid and important bug fix that prevents potential data corruption or loss of audit history in a multi-tenant system by ensuring the `tenant_id` is correctly resolved.
</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>✅ <s>Fix progress lookup key mismatch</s></summary>

___

<details><summary><b>Suggestion Impact:</b></summary>Updated the progress lookup to use `execution.execution_id || execution.id` instead of `Map.get(execution, :execution_id) || execution.id`, aligning the key access with the expected `execution_id` field usage.

code diff:

```diff
@@ -916,7 +950,7 @@
         execution={execution}
         group={Map.get(@groups_map, execution.sweep_group_id)}
         progress={
-          Map.get(@execution_progress, Map.get(execution, :execution_id) || execution.id)
+          Map.get(@execution_progress, execution.execution_id || execution.id)
         }
```

</details>

___

**Correct the progress lookup key to consistently use <code>execution.execution_id</code>. The current fallback to <code>execution.id</code> may cause progress updates to fail if the <code>id</code> and <code>execution_id</code> differ.**

[web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [918-920]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-b2127e71582033bc6dfd2d7397f56bf43c1f7c613defffc504b3d8ee1e7406c4R918-R920)

```diff
 progress={
-  Map.get(@execution_progress, Map.get(execution, :execution_id) || execution.id)
+  Map.get(@execution_progress, execution.execution_id || execution.id)
 }
```

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: This is a valid concern as the progress map is keyed by `execution_id`, and falling back to `execution.id` would fail if they differ, preventing real-time progress updates.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>✅ <s>Prevent runtime crashes on map access</s></summary>

___

<details><summary><b>Suggestion Impact:</b></summary>The code mapping device records to canonical_device_id was changed from dot-access to Map.get(&1, :canonical_device_id), matching the suggestion and preventing potential KeyError crashes.
code diff:

```diff
 (available_ips ++ unavailable_ips)
 |> Enum.map(&Map.get(device_map, &1))
 |> Enum.reject(&is_nil/1)
-|> Enum.map(& &1.canonical_device_id)
+|> Enum.map(&Map.get(&1, :canonical_device_id))
 |> Enum.reject(&is_nil/1)
```

</details>

___

**Replace unsafe dot-access on a map with <code>Map.get/2</code> to safely retrieve the <code>:canonical_device_id</code> and prevent potential runtime crashes if the key is absent.**

[elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [394-400]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-06f4b3bf56e5f1d122b25040ea7f321125d6cae20606811dc0b2a0ddc7a66226R394-R400)

```diff
 all_device_uids =
   (available_ips ++ unavailable_ips)
   |> Enum.map(&Map.get(device_map, &1))
   |> Enum.reject(&is_nil/1)
-  |> Enum.map(& &1.canonical_device_id)
+  |> Enum.map(&Map.get(&1, :canonical_device_id))
   |> Enum.reject(&is_nil/1)
   |> Enum.uniq()
```

`[Suggestion processed]`

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: This suggestion correctly identifies that using dot-access (`&1.canonical_device_id`) is unsafe for maps and can cause runtime errors if the key is missing, proposing a switch to `Map.get/2` which improves the code's robustness.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>✅ <s>Emit metrics for all results</s></summary>

___

<details><summary><b>Suggestion Impact:</b></summary>Changed emit_forward_metrics/4 to gate telemetry emission with should_buffer?(status) instead of checking only status[:source] == "results", ensuring metrics are emitted for both string and atom sources.
code diff:

```diff
 defp emit_forward_metrics(result, status, from_buffer, started_at) do
-  if status[:source] == "results" do
+  if should_buffer?(status) do
```

</details>

___

**Update the condition for emitting metrics to match the buffering logic, ensuring metrics are captured for both string and atom versions of the <code>results</code> source.**

[elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_processor.ex [251-277]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-2d04050dff3ba2cc8153559e33a892f5f421982bf6dcbda7172857b3bf398a02R251-R277)

```diff
 defp should_buffer?(status) do
   status[:source] in ["results", :results]
 end

 defp emit_forward_metrics(result, status, from_buffer, started_at) do
-  if status[:source] == "results" do
+  if should_buffer?(status) do
     duration_ms =
       System.monotonic_time()
       |> Kernel.-(started_at)
       |> System.convert_time_unit(:native, :millisecond)

     :telemetry.execute(
       [:serviceradar, :agent_gateway, :results, :forward],
       %{count: 1, duration_ms: duration_ms},
       %{
         result: result,
         from_buffer: from_buffer,
         service_type: status[:service_type],
         service_name: status[:service_name],
         agent_id: status[:agent_id],
         gateway_id: status[:gateway_id],
         tenant_slug: status[:tenant_slug],
         partition: status[:partition]
       }
     )
   end
 end
```

`[Suggestion processed]`

<details><summary>Suggestion importance[1-10]: 7</summary>

___

Why: The suggestion correctly identifies an inconsistency where telemetry metrics would be missed if `status[:source]` is an atom, improving observability by ensuring all relevant events are tracked.

</details></details></td><td align=center>Medium
</td></tr><tr><td>

<details><summary>Stabilize count return value</summary>

___

**In <code>get_total_matching_count</code>, return <code>0</code> instead of <code>nil</code> on query failure.
This ensures type stability and prevents confusing UI messages like "All nil matching device(s) selected".**

[web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [1183-1185]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R1183-R1185)

```diff
 _ ->
-  nil
+  0
```

<details><summary>Suggestion importance[1-10]: 6</summary>

___

Why: The suggestion correctly points out that returning `nil` can lead to confusing UI output and makes a good case for returning `0` to maintain type stability and improve user experience.

</details></details></td><td align=center>Low
</td></tr><tr><td>

<details><summary>✅ <s>Avoid ETS race crash</s></summary>

___

<details><summary><b>Suggestion Impact:</b></summary>The commit addresses the same general concern (ETS table not existing / being terminated) by adding :ets.whereis(@table_name) == :undefined guards before performing ETS operations (insert/delete/select_delete/info). It does not implement the specific suggested try/rescue around :ets.lookup/2, but it applies a related defensive pattern to avoid crashes when the table is missing.
code diff:

```diff
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      key = cache_key(tenant_id, config_type, partition, agent_id)
+      expires_at = System.monotonic_time(:millisecond) + ttl_ms
+
+      :ets.insert(@table_name, {key, entry, expires_at})
+      :ok
+    end
   end

   @doc """
@@ -104,13 +108,17 @@
   """
   @spec invalidate(String.t(), atom()) :: :ok
   def invalidate(tenant_id, config_type) do
-    # Match all keys for this tenant and config type
-    match_spec = [
-      {{{tenant_id, config_type, :_, :_}, :_, :_}, [], [true]}
-    ]
-
-    :ets.select_delete(@table_name, match_spec)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      # Match all keys for this tenant and config type
+      match_spec = [
+        {{{tenant_id, config_type, :_, :_}, :_, :_}, [], [true]}
+      ]
+
+      :ets.select_delete(@table_name, match_spec)
+      :ok
+    end
   end

   @doc """
@@ -118,12 +126,16 @@
   """
   @spec invalidate_tenant(String.t()) :: :ok
   def invalidate_tenant(tenant_id) do
-    match_spec = [
-      {{{tenant_id, :_, :_, :_}, :_, :_}, [], [true]}
-    ]
-
-    :ets.select_delete(@table_name, match_spec)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      match_spec = [
+        {{{tenant_id, :_, :_, :_}, :_, :_}, [], [true]}
+      ]
+
+      :ets.select_delete(@table_name, match_spec)
+      :ok
+    end
   end

   @doc """
@@ -131,9 +143,13 @@
   """
   @spec invalidate_key(String.t(), atom(), String.t(), String.t() | nil) :: :ok
   def invalidate_key(tenant_id, config_type, partition, agent_id) do
-    key = cache_key(tenant_id, config_type, partition, agent_id)
-    :ets.delete(@table_name, key)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      key = cache_key(tenant_id, config_type, partition, agent_id)
+      :ets.delete(@table_name, key)
+      :ok
+    end
   end

   @doc """
@@ -141,8 +157,12 @@
   """
   @spec clear_all() :: :ok
   def clear_all do
-    :ets.delete_all_objects(@table_name)
-    :ok
+    if :ets.whereis(@table_name) == :undefined do
+      :ok
+    else
+      :ets.delete_all_objects(@table_name)
+      :ok
+    end
   end

   @doc """
@@ -150,10 +170,14 @@
   """
   @spec stats() :: map()
   def stats do
-    %{
-      size: :ets.info(@table_name, :size),
-      memory_bytes: :ets.info(@table_name, :memory) * :erlang.system_info(:wordsize)
-    }
+    if :ets.whereis(@table_name) == :undefined do
+      %{size: 0, memory_bytes: 0}
+    else
+      %{
+        size: :ets.info(@table_name, :size),
+        memory_bytes: :ets.info(@table_name, :memory) * :erlang.system_info(:wordsize)
+      }
+    end
   end
```

</details>

___

**Prevent a potential crash from a race condition by wrapping the <code>:ets.lookup/2</code> call in a <code>try/rescue</code> block to handle cases where the ETS table is terminated unexpectedly.**

[elixir/serviceradar_core/lib/serviceradar/agent_config/config_cache.ex [46-64]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-95f0c8640267167409c8af66d33550c2440b1ac5ec810f5e4d6fcd8df6ef8e2fR46-R64)

```diff
 if :ets.whereis(@table_name) == :undefined do
   :miss
 else
   key = cache_key(tenant_id, config_type, partition, agent_id)

-  case :ets.lookup(@table_name, key) do
-    [{^key, entry, expires_at}] ->
-      if System.monotonic_time(:millisecond) < expires_at do
-        {:ok, entry}
-      else
-        # Expired - delete and return miss
-        :ets.delete(@table_name, key)
+  try do
+    case :ets.lookup(@table_name, key) do
+      [{^key, entry, expires_at}] ->
+        if System.monotonic_time(:millisecond) < expires_at do
+          {:ok, entry}
+        else
+          # Expired - delete and return miss
+          :ets.delete(@table_name, key)
+          :miss
+        end
+
+      [] ->
         :miss
-      end
-
-    [] ->
+    end
+  rescue
+    ArgumentError ->
       :miss
   end
 end
```

`[Suggestion processed]`

<details><summary>Suggestion importance[1-10]: 6</summary>

___

Why: The suggestion correctly identifies a race condition with ETS table lookups and proposes a valid fix to improve the code's resilience against crashes, making the cache access more robust.
</details></details></td><td align=center>Low
</td></tr><tr><td rowspan=5>Possible issue</td>
<td>

<details><summary>Chunk large bulk updates<!-- not_implemented --></summary>

___

**Refactor <code>update_tags_for_uids</code> to process updates in chunks. This prevents potential database errors when applying bulk actions to a large number of selected devices.**

[web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [258-297]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R258-R297)

```diff
 defp update_tags_for_uids(scope, uids, new_tags) do
-  query =
-    Device
-    |> Ash.Query.for_read(:read, %{})
-    |> Ash.Query.filter(uid in ^uids)
+  uids
+  |> Enum.chunk_every(1000)
+  |> Enum.reduce_while({:ok, 0}, fn uid_chunk, {:ok, acc_count} ->
+    query =
+      Device
+      |> Ash.Query.for_read(:read, %{})
+      |> Ash.Query.filter(uid in ^uid_chunk)

-  case Ash.count(query, scope: scope) do
-    {:ok, existing_count} ->
-      result =
-        Ash.bulk_update(query, :update, %{},
-          scope: scope,
-          return_records?: false,
-          return_errors?: true,
-          atomic_update: %{
-            tags:
-              expr(
-                fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ref(:tags), ^new_tags)
-              )
-          }
-        )
+    case Ash.count(query, scope: scope) do
+      {:ok, existing_count} ->
+        result =
+          Ash.bulk_update(query, :update, %{},
+            scope: scope,
+            return_records?: false,
+            return_errors?: true,
+            atomic_update: %{
+              tags:
+                expr(
+                  fragment("coalesce(?, '{}'::jsonb) || (?::jsonb)", ref(:tags), ^new_tags)
+                )
+            }
+          )

-      case result do
-        %Ash.BulkResult{status: :success} ->
-          if existing_count < length(uids) do
-            {:error, "One or more devices were not found"}
-          else
-            {:ok, existing_count}
-          end
+        case result do
+          %Ash.BulkResult{status: :success} ->
+            if existing_count < length(uid_chunk) do
+              {:halt, {:error, "One or more devices were not found"}}
+            else
+              {:cont, {:ok, acc_count + existing_count}}
+            end

-        %Ash.BulkResult{status: :partial_success, errors:
errors} -> - {:error, format_changeset_errors(List.first(errors || []))} + %Ash.BulkResult{status: :partial_success, errors: errors} -> + {:halt, {:error, format_changeset_errors(List.first(errors || []))}} - %Ash.BulkResult{status: :error, errors: errors} -> - {:error, format_changeset_errors(List.first(errors || []))} - end + %Ash.BulkResult{status: :error, errors: errors} -> + {:halt, {:error, format_changeset_errors(List.first(errors || []))}} + end - {:error, error} -> - {:error, format_changeset_errors(error)} - end + {:error, error} -> + {:halt, {:error, format_changeset_errors(error)}} + end + end) end ``` <details><summary>Suggestion importance[1-10]: 8</summary> __ Why: The suggestion correctly identifies a potential scalability issue with large bulk updates and proposes a robust solution using chunking to prevent database errors, which is a critical fix for reliability. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>Avoid infinite cleanup batching loops</summary> ___ **Add a case to handle when <code>Repo.delete_all/1</code> deletes zero records to prevent a <br>potential infinite loop in the <code>do_cleanup_batch</code> function.** [elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_data_cleanup_worker.ex [145-162]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-b8660522af1e1ad3ba8da56f754567fdae03d09fa9e18749a3a49435893073feR145-R162) ```diff case Repo.delete_all(delete_query) do + {0, _} -> + Logger.warning( + "SweepDataCleanupWorker: Delete returned 0 for #{table} in #{schema}; stopping batch to avoid infinite loop" + ) + + %{acc | errors: acc.errors + 1} + {count, _} -> Logger.debug( "SweepDataCleanupWorker: Deleted #{count} #{table} records from #{schema}" ) # Continue with next batch do_cleanup_batch( schema, table, resource, timestamp_field, cutoff, batch_size, extra_filter, %{acc | deleted: acc.deleted + count} ) end ``` <!-- /improve --apply_suggestion=8 --> 
<details><summary>Suggestion importance[1-10]: 8</summary> __ Why: This suggestion correctly identifies and fixes a potential infinite loop in the cleanup worker, which is a significant robustness improvement. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>✅ <s>Prevent crashes from bad types</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The patch replaces the direct div/2 call with logic that validates/parses the raw response time value into an integer (accepting integers and integer-strings) before dividing, otherwise returning nil. This prevents div/2 crashes on bad types, aligning with the suggestion’s goal (though it does not add explicit float handling). code diff: ```diff # Parse response time (convert ns to ms) - response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] - response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000) + response_time_ns_raw = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] + + response_time_ns = + cond do + is_integer(response_time_ns_raw) -> + response_time_ns_raw + + is_binary(response_time_ns_raw) -> + case Integer.parse(response_time_ns_raw) do + {parsed, ""} -> parsed + _ -> nil + end + + true -> + nil + end + + response_time_ms = + if is_integer(response_time_ns), + do: div(response_time_ns, 1_000_000), + else: nil ``` </details> ___ **Add type checking before calling <code>div/2</code> on <code>response_time_ns</code> to handle potential <br>float or string values and prevent crashes during result ingestion.** [elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [304-305]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-06f4b3bf56e5f1d122b25040ea7f321125d6cae20606811dc0b2a0ddc7a66226R304-R305) ```diff response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] -response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000) 
+response_time_ms = + case response_time_ns do + ns when is_integer(ns) -> div(ns, 1_000_000) + ns when is_float(ns) -> ns |> trunc() |> div(1_000_000) + ns when is_binary(ns) -> + case Integer.parse(ns) do + {int, _rest} -> div(int, 1_000_000) + :error -> nil + end + + _ -> + nil + end + ``` `[Suggestion processed]` <details><summary>Suggestion importance[1-10]: 7</summary> __ Why: The suggestion correctly identifies that `div/2` will crash with non-integer inputs, and the proposed change adds robust type checking to prevent ingestion failures from malformed data. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>✅ <s>Include tenant id in actor</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit updates the fallback "system" actor map to include tenant_id: Scope.tenant_id(scope), matching the suggestion to ensure tenant-aware authorization. code diff: ```diff @@ -1876,10 +1881,15 @@ tenant_id: Scope.tenant_id(scope) } - _ -> - %{id: "system", email: "system@serviceradar", role: :admin} - end - end + _ -> + %{ + id: "system", + email: "system@serviceradar", + role: :admin, + tenant_id: Scope.tenant_id(scope) + } + end +end ``` </details> ___ **Add the <code>tenant_id</code> from the scope to the fallback "system" actor in <br><code>build_sweep_actor/1</code> to ensure correct authorization behavior.** [web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex [1869-1882]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-92d8e0af57d2f65dfb8562e896dbea17ef9f47074ffd14f58261decc76f80c24R1869-R1882) ```diff defp build_sweep_actor(scope) do case scope do %{user: user} when not is_nil(user) -> %{ id: user.id, email: user.email, role: user.role, tenant_id: Scope.tenant_id(scope) } _ -> - %{id: "system", email: "system@serviceradar", role: :admin} + %{ + id: "system", + email: "system@serviceradar", + role: :admin, + tenant_id: Scope.tenant_id(scope) + } end end ``` 
<details><summary>Suggestion importance[1-10]: 7</summary> __ Why: The suggestion correctly points out that the fallback actor is missing a `tenant_id`, which is important for authorization, and the proposed change correctly adds it. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>✅ <s>Keep selection mode consistent<!-- not_implemented --></s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The toggle_select_all handler was updated to assign selected_devices and also reset select_all_matching to false and total_matching_count to nil, matching the suggested UX consistency improvement. code diff: ```diff @@ -149,7 +149,11 @@ MapSet.union(selected, device_uids) end - {:noreply, assign(socket, :selected_devices, updated)} + {:noreply, + socket + |> assign(:selected_devices, updated) + |> assign(:select_all_matching, false) + |> assign(:total_matching_count, nil)} end ``` </details> ___ **Update the <code>toggle_select_all</code> event handler to reset the <code>select_all_matching</code> <br>state. 
This ensures that manually toggling the page selection provides clear and <br>consistent UI feedback.** [web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [132-153]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R132-R153) ```diff def handle_event("toggle_select_all", _params, socket) do devices = socket.assigns.devices selected = socket.assigns.selected_devices device_uids = devices |> Enum.filter(&is_map/1) |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end) |> Enum.filter(&is_binary/1) |> MapSet.new() updated = if MapSet.subset?(device_uids, selected) do # All current page devices selected, deselect them MapSet.difference(selected, device_uids) else # Select all current page devices MapSet.union(selected, device_uids) end - {:noreply, assign(socket, :selected_devices, updated)} + {:noreply, + socket + |> assign(:selected_devices, updated) + |> assign(:select_all_matching, false) + |> assign(:total_matching_count, nil)} end ``` <details><summary>Suggestion importance[1-10]: 6</summary> __ Why: This is a valid UX improvement that makes the bulk selection feature more consistent and less confusing for the user, aligning its behavior with `toggle_device_select`. </details></details></td><td align=center>Low </td></tr><tr><td rowspan=1>Security</td> <td> <details><summary>✅ <s>Enforce tenant context for reads</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit refactored load_sweep_results/2 to call get_sweep_tenant/1 inside a case expression, returning nil when tenant is nil and only building/executing the Ash query when a tenant is present, matching the suggested multitenancy guard. 
code diff: ```diff defp load_sweep_results(scope, ip) when is_binary(ip) do actor = build_sweep_actor(scope) - tenant = get_sweep_tenant(scope) - - require Ash.Query - - query = - SweepHostResult - |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant) - |> Ash.Query.sort(inserted_at: :desc) - |> Ash.Query.limit(10) - - case Ash.read(query, authorize?: true) do - {:ok, results} when results != [] -> - %{results: results, total: length(results)} - - _ -> + + case get_sweep_tenant(scope) do + nil -> nil + + tenant -> + require Ash.Query + + query = + SweepHostResult + |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant) + |> Ash.Query.sort(inserted_at: :desc) + |> Ash.Query.limit(10) + + case Ash.read(query, authorize?: true) do + {:ok, results} when results != [] -> + %{results: results, total: length(results)} + + _ -> + nil + end end end ``` </details> ___ **Add a guard to ensure a <code>tenant</code> is present before executing the Ash query in <br><code>load_sweep_results/2</code> to prevent potential multitenancy bypass.** [web-ng/lib/serviceradar_web_ng_web/live/device_live/show.ex [1846-1865]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-92d8e0af57d2f65dfb8562e896dbea17ef9f47074ffd14f58261decc76f80c24R1846-R1865) ```diff defp load_sweep_results(scope, ip) when is_binary(ip) do actor = build_sweep_actor(scope) - tenant = get_sweep_tenant(scope) - require Ash.Query + case get_sweep_tenant(scope) do + nil -> + nil - query = - SweepHostResult - |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant) - |> Ash.Query.sort(inserted_at: :desc) - |> Ash.Query.limit(10) + tenant -> + require Ash.Query - case Ash.read(query, authorize?: true) do - {:ok, results} when results != [] -> - %{results: results, total: length(results)} + query = + SweepHostResult + |> Ash.Query.for_read(:by_ip, %{ip: ip}, actor: actor, tenant: tenant) + |> Ash.Query.sort(inserted_at: :desc) + |> Ash.Query.limit(10) - _ -> - nil 
+ case Ash.read(query, authorize?: true) do + {:ok, results} when results != [] -> + %{results: results, total: length(results)} + + _ -> + nil + end end end ``` <details><summary>Suggestion importance[1-10]: 8</summary> __ Why: This is a valid security concern, as passing a `nil` tenant to an Ash query could bypass multitenancy checks, and the proposed change correctly adds a guard to prevent this. </details></details></td><td align=center>Medium </td></tr><tr><td rowspan=1>General</td> <td> <details><summary>✅ <s>Avoid quadratic list concatenation<!-- not_implemented --></s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit changed the accumulator updates from `uids ++ acc` to `[uids | acc]` in both pagination branches and updated `finalize_uid_acc/1` to `List.flatten()` after reversing, matching the suggested performance optimization. code diff: ```diff @@ -1203,7 +1215,7 @@ |> Enum.filter(&is_binary/1) next_cursor = Map.get(pagination, "next_cursor") - new_acc = uids ++ acc + new_acc = [uids | acc] if is_binary(next_cursor) do fetch_all_uids_paginated(srql_module, scope, query, next_cursor, new_acc) @@ -1218,7 +1230,7 @@ |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end) |> Enum.filter(&is_binary/1) - finalize_uid_acc(uids ++ acc) + finalize_uid_acc([uids | acc]) _ -> finalize_uid_acc(acc) @@ -1228,6 +1240,7 @@ defp finalize_uid_acc(acc) do acc |> Enum.reverse() + |> List.flatten() |> Enum.uniq() ``` </details> ___ **Optimize the <code>fetch_all_uids_paginated</code> function by prepending UIDs to the <br>accumulator instead of using list concatenation. 
This improves performance and <br>reduces memory usage when fetching all matching device UIDs.** [web-ng/lib/serviceradar_web_ng_web/live/device_live/index.ex [1193-1232]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-261a01f4876e5984e1d9e9b38a3540675dfb0272abc30e6bdb2a4fa610353cc7R1193-R1232) ```diff defp fetch_all_uids_paginated(srql_module, scope, query, cursor, acc) do full_query = "in:devices #{query} limit:1000" opts = if cursor, do: %{scope: scope, cursor: cursor}, else: %{scope: scope} case srql_module.query(full_query, opts) do {:ok, %{"results" => results, "pagination" => pagination}} when is_list(results) -> uids = results |> Enum.filter(&is_map/1) |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end) |> Enum.filter(&is_binary/1) next_cursor = Map.get(pagination, "next_cursor") - new_acc = uids ++ acc + new_acc = [uids | acc] if is_binary(next_cursor) do fetch_all_uids_paginated(srql_module, scope, query, next_cursor, new_acc) else finalize_uid_acc(new_acc) end {:ok, %{"results" => results}} when is_list(results) -> uids = results |> Enum.filter(&is_map/1) |> Enum.map(fn row -> Map.get(row, "uid") || Map.get(row, "id") end) |> Enum.filter(&is_binary/1) - finalize_uid_acc(uids ++ acc) + finalize_uid_acc([uids | acc]) _ -> finalize_uid_acc(acc) end end defp finalize_uid_acc(acc) do acc |> Enum.reverse() + |> List.flatten() |> Enum.uniq() end ``` <details><summary>Suggestion importance[1-10]: 7</summary> __ Why: This is a valid performance optimization that avoids quadratic complexity in list concatenation during pagination, which is important for the "select all" feature to work efficiently with large datasets. 
</details></details></td><td align=center>Medium </td></tr> <tr><td align="center" colspan="2"> <!-- /improve_multi --more_suggestions=true --> </td><td></td></tr></tbody></table> </details> <details><summary>✅ Suggestions up to commit 8f1186c</summary> <br><table><thead><tr><td><strong>Category</strong></td><td align=left><strong>Suggestion&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; </strong></td><td align=center><strong>Impact</strong></td></tr><tbody><tr><td rowspan=1>Security</td> <td> <details><summary>Validate fields/operators before SRQL</summary> ___ **Harden the SRQL generation by whitelisting <code>field</code> and <code>operator</code> values in <br><code>criteria_clause/2</code> to prevent potential injection vulnerabilities.** [web-ng/lib/serviceradar_web_ng_web/live/settings/networks_live/index.ex [2280-2296]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-b2127e71582033bc6dfd2d7397f56bf43c1f7c613defffc504b3d8ee1e7406c4R2280-R2296) ```diff defp criteria_to_srql_query(criteria) when is_map(criteria) do clauses = criteria |> Enum.map(fn {field, spec} -> criteria_clause(field, spec) end) |> Enum.reject(fn clause -> is_nil(clause) or clause == "" end) Enum.join(clauses, " ") end defp criteria_clause(field, spec) when is_map(spec) do - case Map.to_list(spec) do - [{operator, value}] -> clause_for_operator(field, operator, value) - _ -> "" + if field_known?(field) do + case Map.to_list(spec) do + [{operator, value}] -> + if operator_known?(field, operator) do + clause_for_operator(field, operator, value) + else + "" + end + + _ -> 
+ "" + end + else + "" end end defp criteria_clause(_field, _spec), do: "" ``` <details><summary>Suggestion importance[1-10]: 9</summary> __ Why: This is a critical security hardening suggestion that prevents potential SRQL injection by validating user-provided `field` and `operator` values against a known list before constructing the query. </details></details></td><td align=center>High </td></tr><tr><td rowspan=4>Possible issue</td> <td> <details><summary>✅ <s>Normalize ports before database insert</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit implements normalization of the open ports field by introducing an open_ports_raw variable and transforming it via List.wrap, Integer.parse for strings, rejecting nils, filtering to valid port range, and applying uniq/sort prior to insertion, preventing batch insert failures from bad data. code diff: ```diff # Parse open ports - open_ports = result["tcp_ports_open"] || result["tcpPortsOpen"] || [] + open_ports_raw = result["tcp_ports_open"] || result["tcpPortsOpen"] || [] + + open_ports = + open_ports_raw + |> List.wrap() + |> Enum.map(fn + port when is_integer(port) -> + port + + port when is_binary(port) -> + case Integer.parse(port) do + {parsed, ""} -> parsed + _ -> nil + end + + _ -> + nil + end) + |> Enum.reject(&is_nil/1) + |> Enum.filter(&(&1 >= 1 and &1 <= 65_535)) + |> Enum.uniq() + |> Enum.sort() ``` </details> ___ **Normalize the <code>open_ports</code> list by converting all values to integers and filtering <br>out invalid entries before database insertion to prevent batch insertion <br>failures.** [elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [307-308]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-06f4b3bf56e5f1d122b25040ea7f321125d6cae20606811dc0b2a0ddc7a66226R307-R308) ```diff # Parse open ports -open_ports = result["tcp_ports_open"] || result["tcpPortsOpen"] || [] +open_ports_raw = result["tcp_ports_open"] || 
result["tcpPortsOpen"] || [] +open_ports = + open_ports_raw + |> List.wrap() + |> Enum.map(fn + p when is_integer(p) -> p + p when is_binary(p) -> + case Integer.parse(p) do + {n, ""} -> n + _ -> nil + end + + _ -> + nil + end) + |> Enum.reject(&is_nil/1) + |> Enum.filter(&(&1 >= 1 and &1 <= 65_535)) + |> Enum.uniq() + |> Enum.sort() + ``` <!-- /improve --apply_suggestion=1 --> <details><summary>Suggestion importance[1-10]: 8</summary> __ Why: This suggestion correctly identifies a potential data type mismatch that could cause `Repo.insert_all` to fail for an entire batch, improving the robustness of the data ingestion pipeline. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>✅ <s>Prevent unbounded flush looping</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit changed handle_info(:flush) to call flush_queue/2 once, then either immediately re-sends :flush when more items remain or schedules the normal interval flush otherwise. It also removed the recursive flush_all_batches/2 helper, eliminating unbounded recursion in a single handle_info call. code diff: ```diff def handle_info(:flush, state) do - {state, _more?} = flush_all_batches(state, @default_flush_batch_size) - schedule_flush(state.flush_interval_ms) + {state, more?} = flush_queue(state, @default_flush_batch_size) + + if more? do + Process.send_after(self(), :flush, 0) + else + schedule_flush(state.flush_interval_ms) + end + {:noreply, state} end @@ -109,16 +115,6 @@ end end - defp flush_all_batches(state, batch_size) do - {state, more?} = flush_queue(state, batch_size) - - if more? 
do - flush_all_batches(state, batch_size) - else - {state, false} - end - end - defp schedule_flush(flush_interval_ms) do Process.send_after(self(), :flush, max(flush_interval_ms, 1_000)) end ``` </details> ___ **Modify the flush logic to process only one batch per <code>:flush</code> event and schedule a <br>subsequent flush if more items remain, instead of using unbounded recursion <br>within a single <code>handle_info</code> call.** [elixir/serviceradar_agent_gateway/lib/serviceradar_agent_gateway/status_buffer.ex [76-80]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-557fa02eff90dabc3757fea28b49208bb07d03698bd485219bc7301058dbb2c4R76-R80) ```diff def handle_info(:flush, state) do - {state, _more?} = flush_all_batches(state, @default_flush_batch_size) - schedule_flush(state.flush_interval_ms) + {state, more?} = flush_queue(state, @default_flush_batch_size) + + if more? do + Process.send_after(self(), :flush, 0) + else + schedule_flush(state.flush_interval_ms) + end + {:noreply, state} end ``` <!-- /improve --apply_suggestion=2 --> <details><summary>Suggestion importance[1-10]: 8</summary> __ Why: This suggestion correctly identifies a GenServer anti-pattern that could block the process and cause timeouts, and the proposed fix is the standard, correct way to handle batch work without compromising responsiveness. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>✅ <s>Prevent response-time parsing crashes</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit implemented defensive parsing for icmp response time by introducing a raw value, converting strings via Integer.parse/1, and only performing div/2 when an integer is present, preventing potential runtime crashes. (It also added similar defensive parsing for open ports.) 
code diff: ```diff # Parse response time (convert ns to ms) - response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] - response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000) + response_time_ns_raw = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] + + response_time_ns = + cond do + is_integer(response_time_ns_raw) -> + response_time_ns_raw + + is_binary(response_time_ns_raw) -> + case Integer.parse(response_time_ns_raw) do + {parsed, ""} -> parsed + _ -> nil + end + + true -> + nil + end + + response_time_ms = + if is_integer(response_time_ns), + do: div(response_time_ns, 1_000_000), + else: nil ``` </details> ___ **Defensively parse the <code>response_time_ns</code> value, handling non-integer types like <br>strings, to prevent crashes during the division operation and ensure batch <br>processing is not aborted.** [elixir/serviceradar_core/lib/serviceradar/sweep_jobs/sweep_results_ingestor.ex [303-305]](https://github.com/carverauto/serviceradar/pull/2263/files#diff-06f4b3bf56e5f1d122b25040ea7f321125d6cae20606811dc0b2a0ddc7a66226R303-R305) ```diff # Parse response time (convert ns to ms) -response_time_ns = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] -response_time_ms = if response_time_ns, do: div(response_time_ns, 1_000_000) +response_time_ns_raw = result["icmp_response_time_ns"] || result["icmpResponseTimeNs"] +response_time_ns = + cond do + is_integer(response_time_ns_raw) -> + response_time_ns_raw + + is_binary(response_time_ns_raw) -> + case Integer.parse(response_time_ns_raw) do + {n, ""} -> n + _ -> nil + end + + true -> + nil + end + +response_time_ms = if is_integer(response_time_ns), do: div(response_time_ns, 1_000_000), else: nil + ``` <!-- /improve --apply_suggestion=3 --> <details><summary>Suggestion importance[1-10]: 7</summary> __ Why: The suggestion correctly identifies that `div/2` will crash if `response_time_ns` is not an integer, which is plausible for JSON 
data, and this would halt the entire batch processing. </details></details></td><td align=center>Medium </td></tr><tr><td> <details><summary>✅ <s>Guard ETS access when unavailable</s></summary> ___ <details><summary><b>Suggestion Impact:</b></summary>The commit wraps all the specified public ETS access functions (put, invalidate, invalidate_tenant, invalidate_key, clear_all, stats) with an :ets.whereis(@table_name) == :undefined check, returning :ok (or zeroed stats) when the table is missing—matching the suggested robustness fix. code diff: ```diff @@ -92,11 +92,15 @@ """ @spec put(String.t(), atom(), String.t(), String.t() | nil, map()) :: :ok def put(tenant_id, config_type, partition, agent_id, entry, ttl_ms \\ @default_ttl_ms) do - key = cache_key(tenant_id, config_type, partition, agent_id) - expires_at = System.monotonic_time(:millisecond) + ttl_ms ...
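Several of the suggestions above apply the same defensive-parsing idea: coerce untrusted JSON values to integers before arithmetic, and normalize collections before a batch insert. A minimal standalone sketch of that pattern in Elixir — the module and function names here are illustrative, not part of the PR:

```elixir
# Hypothetical helpers illustrating the defensive-parsing pattern from the
# sweep_results_ingestor suggestions: never call div/2 or insert port lists
# without first validating the raw values.
defmodule SweepParseSketch do
  # Coerce an integer-like value (integer or clean numeric string) or return nil.
  def to_int(v) when is_integer(v), do: v

  def to_int(v) when is_binary(v) do
    case Integer.parse(v) do
      {n, ""} -> n
      _ -> nil
    end
  end

  def to_int(_), do: nil

  # Convert nanoseconds to milliseconds only when the input parsed cleanly,
  # so malformed data yields nil instead of crashing the whole batch.
  def ns_to_ms(raw) do
    case to_int(raw) do
      nil -> nil
      ns -> div(ns, 1_000_000)
    end
  end

  # Keep only valid TCP ports (1..65535), deduplicated and sorted,
  # before handing the list to Repo.insert_all/2.
  def normalize_ports(raw) do
    raw
    |> List.wrap()
    |> Enum.map(&to_int/1)
    |> Enum.reject(&is_nil/1)
    |> Enum.filter(&(&1 in 1..65_535))
    |> Enum.uniq()
    |> Enum.sort()
  end
end

# SweepParseSketch.ns_to_ms("2500000")                    #=> 2
# SweepParseSketch.normalize_ports([80, "443", "bad", 80]) #=> [80, 443]
```

The key design choice, shared by the accepted suggestions, is that a single malformed field degrades to `nil` (or is dropped from the list) rather than raising and aborting the entire ingestion batch.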
qodo-code-review[bot] commented 2026-01-12 03:57:05 +00:00 (Migrated from github.com)
Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2263#issuecomment-3736796432
Original created: 2026-01-12T03:57:05Z

Persistent suggestions updated to latest commit 1964dc9

qodo-code-review[bot] commented 2026-01-12 04:15:37 +00:00 (Migrated from github.com)

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2263#issuecomment-3736822791
Original created: 2026-01-12T04:15:37Z

Persistent suggestions updated to latest commit 8f1186c

Reference
carverauto/serviceradar!2652