Bug/srql analytics #2418

Merged
mfreeman451 merged 7 commits from refs/pull/2418/head into main 2025-11-18 08:47:50 +00:00
mfreeman451 commented 2025-11-18 04:47:55 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #1950
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/1950
Original created: 2025-11-18T04:47:55Z
Original updated: 2025-11-18T08:48:16Z
Original head: carverauto/serviceradar:bug/srql_analytics
Original base: main
Original merged: 2025-11-18T08:47:50Z by @mfreeman451

User description

IMPORTANT: Please sign the Developer Certificate of Origin

Thank you for your contribution to ServiceRadar. Please note that, when contributing, the developer must include
a DCO sign-off statement indicating DCO acceptance in at least one commit message. Here
is an example Signed-off-by line in a commit message:

Signed-off-by: J. Doe <j.doe@domain.com>

Describe your changes

Code checklist before requesting a review

  • I have signed the DCO.
  • The build completes without errors.
  • All tests pass when running make test.

PR Type

Enhancement, Bug fix


Description

  • Extended SRQL query engine with support for new entity types: Interfaces, Services, OtelMetrics, TraceSummaries, and Traces

  • Added statistics aggregation support across logs, traces, metrics, and interfaces queries with flexible grouping and filtering

  • Implemented PostgreSQL TLS/SSL connection support with custom connection manager and root certificate configuration

  • Enhanced authorization handling in web API with cookie fallback for access tokens

  • Added open-ended absolute time ranges for queries with only start or end bounds

  • Improved shell script portability in Docker entrypoint with better string handling and SSL configuration via environment variables

  • Added comprehensive data models for new query entities with JSON serialization support

  • Extended parser to recognize new entity types and stats expressions

  • Updated Kubernetes configuration for OTel integration, debug logging, and ingress routing

  • Added SRQL service documentation covering authentication, rate limiting, and deployment checklist


Diagram Walkthrough

flowchart LR
  Parser["Parser<br/>New entities & stats"] -->|Route| QueryEngine["Query Engine<br/>execute_query"]
  QueryEngine -->|Dispatch| Handlers["Query Handlers<br/>Interfaces, Services,<br/>OtelMetrics, Traces,<br/>TraceSummaries"]
  Handlers -->|Execute| DB["PostgreSQL<br/>with TLS/SSL"]
  WebAPI["Web API<br/>Authorization"] -->|Fallback| CookieAuth["Cookie-based<br/>Access Token"]
  WebAPI -->|Query| QueryEngine
  Config["Config<br/>SSL Cert Path"] -->|Initialize| DB

File Walkthrough

Relevant files
Enhancement
13 files
route.ts
Improve authorization header resolution with cookie fallback

web/src/app/api/query/route.ts

  • Added resolveAuthorizationHeader() function to handle multiple
    authorization header sources
  • Function checks standard Authorization header (case-insensitive) and
    falls back to accessToken cookie
  • Refactored POST handler to use new helper function instead of inline
    header resolution
+17/-1   
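The fallback order described above (explicit `Authorization` header first, then the `accessToken` cookie) can be sketched as follows. This is an illustrative Rust translation of the logic, not the actual `route.ts` implementation; function and parameter names are assumptions.

```rust
/// Resolve the outgoing Authorization value: prefer an explicit header,
/// fall back to a cookie-provided access token, otherwise return None.
/// (Sketch only; the real helper lives in web/src/app/api/query/route.ts.)
fn resolve_authorization_header(
    header: Option<&str>,
    cookie_token: Option<&str>,
) -> Option<String> {
    if let Some(h) = header {
        let h = h.trim();
        if !h.is_empty() {
            return Some(h.to_string());
        }
    }
    if let Some(t) = cookie_token {
        let t = t.trim();
        if !t.is_empty() {
            // The cookie stores a bare token, so the Bearer prefix is added here.
            return Some(format!("Bearer {t}"));
        }
    }
    None
}
```

Only the ordering (header before cookie) is taken from the PR description; the trimming behavior is an assumption.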
trace_summaries.rs
Add trace summaries query execution module                             

rust/srql/src/query/trace_summaries.rs

  • New module implementing trace summaries query execution with SQL
    generation
  • Supports filtering, ordering, and statistics aggregation on trace data
  • Implements custom SQL binding and placeholder rewriting for PostgreSQL
    compatibility
  • Includes comprehensive stats expression parsing for count, status, and
    duration comparisons
+704/-0 
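PostgreSQL expects numbered placeholders (`$1`, `$2`, ...), so a module that assembles raw SQL needs a rewriting pass. A minimal sketch, assuming `?`-style input placeholders (the real module's input syntax may differ, and a production version must also skip `?` inside string literals):

```rust
/// Rewrite `?` placeholders into PostgreSQL's numbered `$n` form.
/// Naive sketch: does not handle `?` occurring inside quoted literals.
fn rewrite_placeholders(sql: &str) -> String {
    let mut out = String::with_capacity(sql.len());
    let mut n = 0;
    for ch in sql.chars() {
        if ch == '?' {
            n += 1;
            out.push('$');
            out.push_str(&n.to_string());
        } else {
            out.push(ch);
        }
    }
    out
}
```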
logs.rs
Add statistics aggregation support to logs queries             

rust/srql/src/query/logs.rs

  • Added stats query support with build_stats_query() function for
    aggregations
  • Implemented LogsStatsExpr enum supporting count() and
    group_uniq_array() expressions
  • Added filter clause builders for stats queries with text and numeric
    field support
  • Refactored severity_number filter formatting for consistency
+402/-42
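The two aggregations named above suggest a small expression type plus a parser. The following is a hedged sketch of what `LogsStatsExpr` parsing could look like; the actual grammar, casing rules, and variant names in `logs.rs` may differ.

```rust
/// Illustrative stats-expression type for logs queries.
#[derive(Debug, PartialEq)]
enum LogsStatsExpr {
    Count,
    GroupUniqArray(String),
}

/// Parse "count()" or "group_uniq_array(<field>)"; anything else is rejected.
fn parse_stats_expr(input: &str) -> Option<LogsStatsExpr> {
    let s = input.trim();
    if s.eq_ignore_ascii_case("count()") {
        return Some(LogsStatsExpr::Count);
    }
    let rest = s.strip_prefix("group_uniq_array(")?;
    let field = rest.strip_suffix(')')?.trim();
    if field.is_empty() {
        None
    } else {
        Some(LogsStatsExpr::GroupUniqArray(field.to_string()))
    }
}
```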
otel_metrics.rs
Add OpenTelemetry metrics query module                                     

rust/srql/src/query/otel_metrics.rs

  • New module for OpenTelemetry metrics query execution
  • Supports filtering on trace/span IDs, service names, HTTP/gRPC
    attributes, and performance metrics
  • Implements stats queries with count aggregation and optional grouping
    by service name
  • Includes comprehensive field mapping and boolean value parsing
+490/-0 
interfaces.rs
Add discovered interfaces query module                                     

rust/srql/src/query/interfaces.rs

  • New module for discovered network interfaces query execution
  • Supports filtering on device properties, interface attributes, and IP
    address arrays
  • Implements count-based stats queries with optional grouping
  • Includes ordering by timestamp, device IP, interface name, and other
    attributes
+345/-0 
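A count-based stats query with optional grouping typically branches on whether a group column was requested. The sketch below assumes the `discovered_interfaces` table name from this PR's schema; in real code the group column must come from an allow-list of known fields, never raw user input, to avoid SQL injection.

```rust
/// Build a count query, optionally grouped by one column.
/// `group_by` must be a validated, allow-listed column name.
fn build_count_query(group_by: Option<&str>) -> String {
    match group_by {
        Some(col) => format!(
            "SELECT {col}, count(*) FROM discovered_interfaces GROUP BY {col}"
        ),
        None => "SELECT count(*) FROM discovered_interfaces".to_string(),
    }
}
```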
models.rs
Add data models for new query entities                                     

rust/srql/src/models.rs

  • Added DiscoveredInterfaceRow struct with JSON serialization for
    network interface data
  • Added TraceSpanRow struct for OpenTelemetry trace span data
  • Added ServiceStatusRow struct for service availability status
  • Added OtelMetricRow struct for OpenTelemetry metrics
  • Added TraceSummaryRow struct with QueryableByName for trace summary
    aggregations
+235/-0 
traces.rs
Add OpenTelemetry traces query module                                       

rust/srql/src/query/traces.rs

  • New module for OpenTelemetry trace span query execution
  • Supports filtering on trace/span IDs, service attributes, and status
    codes
  • Implements status code and span kind filters with equality and list
    comparisons
  • Includes ordering by timestamp, service name, and time ranges
+254/-0 
services.rs
Add service status query module                                                   

rust/srql/src/query/services.rs

  • New module for service status query execution
  • Supports filtering on service name, type, availability status, and
    agent/poller IDs
  • Implements boolean value parsing for availability field
  • Includes ordering by timestamp, service name, and service type
+165/-0 
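"Boolean value parsing for the availability field" usually means accepting several user-facing spellings. The accepted set below is an assumption for illustration, not the confirmed set in `services.rs`:

```rust
/// Lenient boolean parser for an availability filter value.
/// The accepted spellings here are illustrative assumptions.
fn parse_bool(value: &str) -> Option<bool> {
    match value.trim().to_ascii_lowercase().as_str() {
        "true" | "t" | "1" | "yes" => Some(true),
        "false" | "f" | "0" | "no" => Some(false),
        _ => None, // unknown spellings are rejected rather than guessed
    }
}
```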
time.rs
Support open-ended absolute time ranges                                   

rust/srql/src/time.rs

  • Added AbsoluteOpenEnd variant for time ranges with only start bound
  • Added AbsoluteOpenStart variant for time ranges with only end bound
  • Updated resolve() method to handle open-ended ranges using current
    time or MIN_UTC
  • Added unit tests for open-ended absolute time range parsing
+73/-3   
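The open-ended variants resolve by substituting the missing bound: "now" for a missing end, the minimum representable time for a missing start. A simplified sketch using plain Unix-second timestamps (the real `time.rs` works with chrono `DateTime`s and `MIN_UTC`):

```rust
/// Simplified time-range model mirroring the new variants.
enum TimeRange {
    Absolute { start: i64, end: i64 },
    AbsoluteOpenEnd { start: i64 },  // "from start until now"
    AbsoluteOpenStart { end: i64 },  // "from the beginning until end"
}

impl TimeRange {
    /// Resolve to a concrete (start, end) pair, filling open bounds
    /// with `now` or the minimum timestamp.
    fn resolve(&self, now: i64) -> (i64, i64) {
        match *self {
            TimeRange::Absolute { start, end } => (start, end),
            TimeRange::AbsoluteOpenEnd { start } => (start, now),
            TimeRange::AbsoluteOpenStart { end } => (i64::MIN, end),
        }
    }
}
```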
db.rs
Add PostgreSQL TLS/SSL connection support                               

rust/srql/src/db.rs

  • Replaced AsyncDieselConnectionManager with custom PgConnectionManager
    for TLS support
  • Added PgTls enum supporting both unencrypted and Rustls-based
    connections
  • Implemented ManageConnection trait for connection pooling with
    validation
  • Added TLS connector builder using root certificates from PGSSLROOTCERT
    environment variable
  • Added initial connectivity check on pool creation
+95/-8   
parser.rs
Extend parser with new entities and stats support               

rust/srql/src/parser.rs

  • Added new entity types: Interfaces, Services, OtelMetrics,
    TraceSummaries, Traces
  • Added stats field to QueryAst for aggregation expressions
  • Updated entity parsing to recognize new entity aliases
  • Added test for stats expression parsing and interfaces entity
+32/-1   
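Entity-alias recognition is typically a case-insensitive table lookup. The alias spellings below are assumptions for illustration; the real parser may accept additional aliases per entity.

```rust
/// The five entity types added by this PR.
#[derive(Debug, PartialEq)]
enum Entity {
    Interfaces,
    Services,
    OtelMetrics,
    TraceSummaries,
    Traces,
}

/// Map a (case-insensitive) alias to its entity; alias set is illustrative.
fn parse_entity(name: &str) -> Option<Entity> {
    match name.to_ascii_lowercase().as_str() {
        "interfaces" => Some(Entity::Interfaces),
        "services" => Some(Entity::Services),
        "otel_metrics" => Some(Entity::OtelMetrics),
        "trace_summaries" => Some(Entity::TraceSummaries),
        "traces" => Some(Entity::Traces),
        _ => None,
    }
}
```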
mod.rs
Integrate new query modules into query engine                       

rust/srql/src/query/mod.rs

  • Added module declarations for new query handlers: interfaces,
    otel_metrics, services, trace_summaries, traces
  • Updated execute_query() to route new entity types to respective
    handlers
  • Updated translate_query() to generate debug SQL for new entities
  • Added stats field to QueryPlan struct
  • Added error logging for database connection failures
+22/-5   
schema.rs
Add database schema definitions for new tables                     

rust/srql/src/schema.rs

  • Added Diesel table definitions for service_status table
  • Added Diesel table definitions for discovered_interfaces table with
    array support
  • Added Diesel table definitions for otel_traces table
  • Added Diesel table definitions for otel_metrics table
+94/-0   
Configuration changes
6 files
config.rs
Add PostgreSQL SSL certificate configuration                         

rust/srql/src/config.rs

  • Added pg_ssl_root_cert optional field to AppConfig
  • Loads certificate path from PGSSLROOTCERT environment variable
+2/-0     
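Loading an optional path from `PGSSLROOTCERT` can be sketched as below; the `pg_ssl_root_cert` field name matches the PR description, while the helper function is illustrative. Treating a blank value the same as an unset variable is an assumption.

```rust
use std::env;

/// Config holding the optional SSL root-certificate path.
struct AppConfig {
    pg_ssl_root_cert: Option<String>,
}

/// Normalize a raw env value: unset and blank both become None.
fn pg_ssl_root_cert_from(raw: Option<String>) -> Option<String> {
    raw.filter(|v| !v.trim().is_empty())
}

impl AppConfig {
    fn from_env() -> Self {
        AppConfig {
            pg_ssl_root_cert: pg_ssl_root_cert_from(env::var("PGSSLROOTCERT").ok()),
        }
    }
}
```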
configmap.yaml
Update Kubernetes configuration for OTel and API routing 

k8s/demo/base/configmap.yaml

  • Removed separate Kong proxy location for /api/query endpoint
  • Updated OpenTelemetry configuration to enable it by default
  • Changed OTel endpoint from localhost to serviceradar-otel:4317
  • Added TLS configuration with certificate files for OTel exporter
  • Added API key header for OTel collector authentication
+13/-13 
kustomization.yaml
Update staging deployment image tags                                         

k8s/demo/staging/kustomization.yaml

  • Updated serviceradar-web image tag to specific SHA commit hash
  • Updated serviceradar-srql image tag to match web service commit
+2/-2     
BUILD.bazel
Enable Procedural Macro Dependencies in Bazel Build           

rust/srql/BUILD.bazel

  • Added proc_macro_deps to rust_library target to support procedural
    macros from dependencies
  • Added proc_macro_deps to rust_binary target for consistent macro
    support in the binary
+2/-0     
serviceradar-srql.yaml
Enable Debug Logging and Backtrace for SRQL Service           

k8s/demo/base/serviceradar-srql.yaml

  • Added RUST_BACKTRACE=1 environment variable for enhanced error
    diagnostics
  • Added RUST_LOG=srql=debug environment variable to enable debug-level
    logging for the srql module
+4/-0     
ingress.yaml
Update Ingress Backend Service Configuration                         

k8s/demo/staging/ingress.yaml

  • Changed ingress backend service from serviceradar-kong to
    serviceradar-web
  • Changed backend service port from 8000 to 3000 for the root path
    routing
+2/-2     
Error handling
1 file
server.rs
Add error logging to query handler                                             

rust/srql/src/server.rs

  • Added error logging when SRQL query execution fails
+4/-1     
Bug fix
1 file
entrypoint-srql.sh
Improve shell script portability and SSL configuration     

docker/compose/entrypoint-srql.sh

  • Fixed urlencode() function to handle string length correctly using wc
    -c
  • Replaced printf -v with od for hex encoding to improve portability
  • Added escape_conn_value() function for libpq connection string
    escaping
  • Refactored database URL construction to use environment variables for
    SSL settings
  • Changed from URL query parameters to environment variables for
    PGSSLROOTCERT and PGSSLMODE
+23/-10 
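For reference, the behavior the script's `urlencode()` is aiming for is RFC 3986 percent-encoding, where only unreserved characters pass through unchanged. The equivalent logic sketched in Rust (not the shell implementation itself):

```rust
/// Percent-encode a connection-string component: RFC 3986 unreserved
/// characters pass through, every other byte becomes %XX.
fn urlencode(input: &str) -> String {
    let mut out = String::new();
    for b in input.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9'
            | b'-' | b'_' | b'.' | b'~' => out.push(b as char),
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}
```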
Documentation
1 file
srql-service.md
SRQL Service Configuration and Security Documentation       

docs/docs/srql-service.md

  • New documentation file for SRQL Service Configuration covering
    authentication and rate limiting
  • Documents SRQL_API_KEY and SRQL_API_KEY_KV_KEY environment variables
    for API key authentication
  • Explains rate limiting configuration with SRQL_RATE_LIMIT_MAX and
    SRQL_RATE_LIMIT_WINDOW_SECS
  • Provides deployment checklist with ingress TLS, datasvc connectivity,
    and verification steps
+74/-0   
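The `SRQL_RATE_LIMIT_MAX` / `SRQL_RATE_LIMIT_WINDOW_SECS` pair reads like a fixed-window limit: at most `max` requests per `window_secs`-second window. A minimal single-client sketch of that semantics (an assumption; the service's actual algorithm and keying are not specified here):

```rust
/// Fixed-window rate limiter: up to `max` requests per window.
struct RateLimiter {
    max: u32,
    window_secs: u64,
    window_start: u64,
    count: u32,
}

impl RateLimiter {
    fn new(max: u32, window_secs: u64) -> Self {
        Self { max, window_secs, window_start: 0, count: 0 }
    }

    /// Returns true if a request at time `now` (seconds) is allowed.
    fn allow(&mut self, now: u64) -> bool {
        if now >= self.window_start + self.window_secs {
            // Window expired: start a fresh one.
            self.window_start = now;
            self.count = 0;
        }
        if self.count < self.max {
            self.count += 1;
            true
        } else {
            false
        }
    }
}
```

A real deployment would key limiters per client (API key or IP) and share state across replicas.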
Dependencies
1 file
Cargo.toml
Add Database and Async Trait Dependencies                               

rust/srql/Cargo.toml

  • Added async-trait dependency for async trait support
  • Added database connection pool and PostgreSQL client dependencies
    (bb8, tokio-postgres, tokio-postgres-rustls)
  • Added TLS/cryptography dependencies (rustls, rustls-pemfile) for
    secure database connections
+6/-0     

qodo-code-review[bot] commented 2025-11-18 04:48:58 +00:00 (Migrated from github.com)

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1950#issuecomment-3545019641
Original created: 2025-11-18T04:48:58Z

PR Compliance Guide 🔍

(Compliance updated until commit github.com/carverauto/serviceradar@9d3aedb45f)

Below is a summary of compliance checks for this PR:

Security Compliance
Insecure auth fallback

Description: Authorization header is populated from the 'accessToken' cookie without HttpOnly/secure
validation, enabling bearer token theft via client-writable cookies or CSRF header
injection paths; ensure the cookie is HttpOnly/SameSite and only used in trusted contexts.

route.ts [86-91]

Referred Code
const cookieToken = req.cookies.get("accessToken")?.value?.trim();
if (cookieToken) {
  return `Bearer ${cookieToken}`;
}

return null;
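The fallback order above (explicit Authorization header first, `accessToken` cookie second, normalized to a `Bearer` value) can be sketched language-agnostically. A minimal Rust sketch for illustration only — the function name and signature are hypothetical, not the PR's TypeScript implementation, and a hardened version would also validate the token server-side:

```rust
// Hypothetical sketch of the header-then-cookie resolution order:
// prefer an explicit Authorization header, fall back to the cookie.
fn resolve_authorization(header: Option<&str>, cookie_token: Option<&str>) -> Option<String> {
    // 1. A non-empty Authorization header wins, trimmed of whitespace.
    if let Some(h) = header.map(str::trim).filter(|h| !h.is_empty()) {
        return Some(h.to_string());
    }
    // 2. Otherwise fall back to the cookie value, prefixing the Bearer scheme.
    //    The compliance concern: this step should only run for HttpOnly/SameSite
    //    cookies in trusted contexts.
    cookie_token
        .map(str::trim)
        .filter(|t| !t.is_empty())
        .map(|t| format!("Bearer {t}"))
}
```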
SQL injection risk

Description: Raw SQL is constructed via string concatenation and manual placeholder rewriting; although
values are bound, the dynamic JSON key assembly and column names for stats may permit SQL
injection if alias/field parsing is bypassed—verify strict validation and avoid
concatenating untrusted identifiers.
logs.rs [159-168]

Referred Code
    let mut sql = String::from("SELECT jsonb_build_object(");
    sql.push_str(&parts.join(", "));
    sql.push_str(") AS payload\nFROM logs");
    if !clauses.is_empty() {
        sql.push_str("\nWHERE ");
        sql.push_str(&clauses.join(" AND "));
    }

    Ok(Some(LogsStatsSql { sql, binds }))
}
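The mitigation this finding asks to verify is that any alias or key interpolated into the `jsonb_build_object(...)` string is restricted to safe identifier characters before concatenation. A minimal sketch of such a validator, assuming an alphanumeric/underscore whitelist (the function and the length cap are hypothetical, not the PR's actual parser rules):

```rust
// Hypothetical validator for identifiers interpolated into SQL text:
// ASCII alphanumerics and underscores only, starting with a letter or
// underscore, and bounded in length.
fn is_safe_identifier(name: &str) -> bool {
    let mut chars = name.chars();
    match chars.next() {
        Some(c) if c.is_ascii_alphabetic() || c == '_' => {}
        _ => return false, // empty string or invalid first character
    }
    name.len() <= 64 && chars.all(|c| c.is_ascii_alphanumeric() || c == '_')
}
```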
SQL injection risk

Description: Stats SQL string concatenates response keys, group column names, and ORDER BY expressions;
while values are bound, concatenated identifiers increase SQL injection surface if parsing
is bypassed—confirm whitelist of group fields and aliases cannot contain unsafe
characters.
otel_metrics.rs [296-323]

Referred Code
if let Some(group_field) = stats.group_field {
    let column = group_field.column();
    sql.push_str(&format!(
        "jsonb_build_object('{}', {column}, '{}', COUNT(*)) AS payload",
        group_field.response_key(),
        stats.alias
    ));
} else {
    sql.push_str(&format!(
        "jsonb_build_object('{}', COUNT(*)) AS payload",
        stats.alias
    ));
}
sql.push_str("\nFROM otel_metrics");
if !clauses.is_empty() {
    sql.push_str("\nWHERE ");
    sql.push_str(&clauses.join(" AND "));
}

if stats.group_field.is_some() {
    let column = stats.group_field.unwrap().column();


 ... (clipped 7 lines)
SQL injection risk

Description: Dynamic stats JSON is built by interpolating parsed aliases into SQL and aggregations;
despite parameter binding for values, alias-to-SQL insertion can allow injection if
validation misses edge cases—ensure aliases are strictly alphanumeric/underscore and
expressions are fully validated.
trace_summaries.rs [170-185]

Referred Code
    let stats = parse_stats(raw_stats)?;
    let (select_sql, mut stats_binds) = build_stats_select(&stats)?;
    sql.push_str("SELECT ");
    sql.push_str(&select_sql);
    sql.push_str(" AS payload\nFROM trace_summaries");
    binds.append(&mut stats_binds);
    let (where_sql, mut where_binds) = build_filters_clause(plan)?;
    sql.push_str(&where_sql);
    binds.append(&mut where_binds);
    return Ok(TraceSummarySql {
        sql,
        binds,
        mode: SummaryMode::Stats,
    });
}

Sensitive data exposure

Description: The JSON output duplicates 'attributes' into both 'attributes' and 'raw_data' with
different sources, which can expose unfiltered trace span data unintentionally; verify
that sensitive attributes are sanitized before serialization.
models.rs [206-229]

Referred Code
pub fn into_json(self) -> serde_json::Value {
    serde_json::json!({
        "timestamp": self.timestamp,
        "trace_id": self.trace_id,
        "span_id": self.span_id,
        "parent_span_id": self.parent_span_id,
        "name": self.name,
        "kind": self.kind,
        "start_time_unix_nano": self.start_time_unix_nano,
        "end_time_unix_nano": self.end_time_unix_nano,
        "service_name": self.service_name,
        "service_version": self.service_version,
        "service_instance": self.service_instance,
        "scope_name": self.scope_name,
        "scope_version": self.scope_version,
        "status_code": self.status_code,
        "status_message": self.status_message,
        "attributes": self.attributes.clone(),
        "resource_attributes": self.resource_attributes,
        "events": self.events,
        "links": self.links,


 ... (clipped 3 lines)
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Error logging: The added error logging records failures, but it is unclear whether all critical actions
and their outcomes (e.g., auth decisions, query execution context) are consistently logged
across new handlers and stats paths.

Referred Code

match response {
    Ok(rows) => Ok(Json(rows)),
    Err(err) => {
        error!(error = ?err, "srql query failed");
        Err(err)
    }
}

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Edge cases: Custom SQL builders perform parsing and binding but some branches return generic invalid
request errors and may not validate extremely large IN lists or pagination bounds,
requiring verification of limits and edge-case handling.

Referred Code
fn build_filters_clause(plan: &QueryPlan) -> Result<(String, Vec<SqlBindValue>)> {
    let mut clauses = Vec::new();
    let mut binds = Vec::new();

    for filter in &plan.filters {
        match filter.field.as_str() {
            "trace_id" => add_text_condition(&mut clauses, &mut binds, "trace_id", filter)?,
            "root_span_id" => add_text_condition(&mut clauses, &mut binds, "root_span_id", filter)?,
            "root_span_name" => {
                add_text_condition(&mut clauses, &mut binds, "root_span_name", filter)?
            }
            "root_service_name" => {
                add_text_condition(&mut clauses, &mut binds, "root_service_name", filter)?
            }
            "status_code" => add_int_condition(&mut clauses, &mut binds, "status_code", filter)?,
            "root_span_kind" => {
                add_int_condition(&mut clauses, &mut binds, "root_span_kind", filter)?
            }
            "span_count" => add_i64_condition(&mut clauses, &mut binds, "span_count", filter)?,
            "error_count" => add_i64_condition(&mut clauses, &mut binds, "error_count", filter)?,
            "duration_ms" => add_float_condition(&mut clauses, &mut binds, "duration_ms", filter)?,


 ... (clipped 115 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
User error detail: Several paths return specific validation messages (e.g., unsupported fields/expressions)
which may be user-facing via the API; confirm these are mapped to safe, generic client
errors while detailed info is only logged.

Referred Code
            "duration_ms" => add_float_condition(&mut clauses, &mut binds, "duration_ms", filter)?,
            other => {
                return Err(ServiceError::InvalidRequest(format!(
                    "unsupported filter field '{other}'"
                )))
            }
        }
    }

    if clauses.is_empty() {
        Ok((String::new(), binds))
    } else {
        Ok((format!("\nWHERE {}", clauses.join(" AND ")), binds))
    }
}

fn add_text_condition(
    clauses: &mut Vec<String>,
    binds: &mut Vec<SqlBindValue>,
    column: &str,
    filter: &Filter,


 ... (clipped 446 lines)

Learn more about managing compliance generic rules or creating your own custom rules
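One way to satisfy this split is to keep the detailed validation message for internal logs and map every error to a generic client-facing string at the API boundary. A hedged sketch, not the service's actual `ServiceError` type:

```rust
// Hypothetical error type separating the logged detail from the
// client-visible message.
#[derive(Debug)]
enum ApiError {
    InvalidRequest(String), // detailed reason, for internal logs only
    Internal(String),
}

impl ApiError {
    // Generic, safe message for the client response body.
    fn client_message(&self) -> &'static str {
        match self {
            ApiError::InvalidRequest(_) => "invalid query",
            ApiError::Internal(_) => "internal error",
        }
    }
    // Detailed message for the server-side log line.
    fn log_message(&self) -> &str {
        match self {
            ApiError::InvalidRequest(detail) | ApiError::Internal(detail) => detail,
        }
    }
}
```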

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Bind logging: On query failure, code logs SQL text and bind values which could expose sensitive data
from filters; verify logs redact or omit sensitive bind contents.

Referred Code
        error!(
            error = ?err,
            sql = %sql_debug,
            binds = ?binds_debug,
            "trace summaries data query failed"
        );
        ServiceError::Internal(err.into())
    })?;
    Ok(rows.into_iter().map(TraceSummaryRow::into_json).collect())
}
SummaryMode::Stats => {
    let sql_debug = summary_query.sql.clone();
    let binds_debug = summary_query.binds.clone();
    let query = summary_query.to_boxed_query();
    let rows: Vec<TraceStatsPayload> = query.load(conn).await.map_err(|err| {
        error!(
            error = ?err,
            sql = %sql_debug,
            binds = ?binds_debug,
            "trace summaries stats query failed"
        );


 ... (clipped 9 lines)

Learn more about managing compliance generic rules or creating your own custom rules
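One redaction approach the finding implies: log only the shape of the binds (type tags and count) rather than their values. A hypothetical sketch — the variant names mirror the `SqlBindValue` seen in the diff but are assumed, not copied from the PR:

```rust
// Hypothetical bind value type mirroring the query builder's binds.
#[derive(Debug, Clone)]
enum SqlBindValue {
    Text(String),
    Int(i64),
    Float(f64),
}

// Summarize binds for logging without exposing their contents:
// emit only a type tag per value.
fn redact_binds(binds: &[SqlBindValue]) -> Vec<&'static str> {
    binds
        .iter()
        .map(|b| match b {
            SqlBindValue::Text(_) => "Text(<redacted>)",
            SqlBindValue::Int(_) => "Int(<redacted>)",
            SqlBindValue::Float(_) => "Float(<redacted>)",
        })
        .collect()
}
```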

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Cookie auth: The new authorization resolver falls back to a Bearer token from the accessToken cookie;
confirm cookie security attributes and server-side validation to prevent token misuse.

Referred Code
function resolveAuthorizationHeader(req: NextRequest): string | null {
  const header =
    req.headers.get("Authorization") || req.headers.get("authorization") || "";
  const trimmed = header.trim();
  if (trimmed) {
    return trimmed;
  }

  const cookieToken = req.cookies.get("accessToken")?.value?.trim();
  if (cookieToken) {
    return `Bearer ${cookieToken}`;
  }

  return null;
}

Learn more about managing compliance generic rules or creating your own custom rules

Compliance status legend
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

Previous compliance checks

Compliance check up to commit 5f5004f
Security Compliance
CSRF via cookie-to-bearer

Description: The new authorization resolution falls back to the 'accessToken' cookie and blindly
converts it to a 'Bearer' header, which may enable CSRF on the backend if the SRQL
endpoint trusts the Authorization header and CORS/CSRF protections are not enforced for
this internal call.
route.ts [86-91]

Referred Code
const cookieToken = req.cookies.get("accessToken")?.value?.trim();
if (cookieToken) {
  return `Bearer ${cookieToken}`;
}

return null;
DB resource exhaustion

Description: The dynamic SQL stats filters construct 'IN/NOT IN' lists by expanding multiple '?'
placeholders based on user-provided lists; while values are still bound safely, the
absence of a maximum list length or total placeholder cap can be abused to create very
large queries and cause database resource exhaustion (DoS).
logs.rs [441-520]

Referred Code
fn build_text_clause(column: &str, filter: &Filter) -> Result<Option<(String, Vec<SqlBindValue>)>> {
    let mut binds = Vec::new();
    let clause = match filter.op {
        FilterOp::Eq => {
            binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
            format!("{column} = ?")
        }
        FilterOp::NotEq => {
            binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
            format!("{column} <> ?")
        }
        FilterOp::Like => {
            binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
            format!("{column} ILIKE ?")
        }
        FilterOp::NotLike => {
            binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
            format!("{column} NOT ILIKE ?")
        }
        FilterOp::In | FilterOp::NotIn => {
            let values = filter.value.as_list()?.to_vec();


 ... (clipped 59 lines)
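The cap this finding asks for can be enforced before any placeholders are generated. A hedged sketch with a hypothetical limit (the constant and function are illustrative, not the PR's code; the `1=1` convention for empty lists matches the builder shown above):

```rust
// Hypothetical guard: refuse to expand IN/NOT IN lists beyond a fixed cap,
// so user-supplied lists cannot balloon the query.
const MAX_IN_LIST: usize = 100; // assumed limit, not from the PR

fn build_in_clause(column: &str, values: &[String], negated: bool) -> Result<String, String> {
    if values.is_empty() {
        return Ok("1=1".into()); // vacuous clause for an empty list
    }
    if values.len() > MAX_IN_LIST {
        return Err(format!("too many values for {column}: max {MAX_IN_LIST}"));
    }
    let placeholders = vec!["?"; values.len()].join(", ");
    let op = if negated { "NOT IN" } else { "IN" };
    Ok(format!("{column} {op} ({placeholders})"))
}
```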
DB resource exhaustion

Description: The stats query builder expands user-provided list filters into many placeholders without
bounding list size, allowing excessively large 'IN/NOT IN' clauses that may degrade
database performance or cause timeouts (potential DoS).
otel_metrics.rs [412-429]

Referred Code
            let values = filter.value.as_list()?.to_vec();
            if values.is_empty() {
                return Ok("1=1".into());
            }
            let mut placeholders = Vec::new();
            for value in values {
                placeholders.push("?".to_string());
                binds.push(SqlBindValue::Text(value));
            }
            let operator = if matches!(filter.op, FilterOp::In) {
                "IN"
            } else {
                "NOT IN"
            };
            Ok(format!("{column} {operator} ({})", placeholders.join(", ")))
        }
    }
}
Unbounded input size

Description: The 'ip_addresses' array filter permits arbitrarily large lists supplied by users and uses
array containment; without limits on list size or request validation, this can lead to
heavy query execution and indexing pressure resulting in DoS.
interfaces.rs [186-209]

Referred Code
}
"ip_addresses" | "ip_address" => {
    let values: Vec<String> = match &filter.value {
        FilterValue::Scalar(value) => vec![value.to_string()],
        FilterValue::List(list) => list.clone(),
    };
    if values.is_empty() {
        return Ok(query);
    }
    let expr = sql::<Bool>("coalesce(ip_addresses, ARRAY[]::text[]) @> ")
        .bind::<Array<Text>, _>(values);
    match filter.op {
        FilterOp::Eq | FilterOp::In => {
            query = query.filter(expr);
        }
        FilterOp::NotEq | FilterOp::NotIn => {
            query = query.filter(not(expr));
        }
        FilterOp::Like | FilterOp::NotLike => {
            return Err(ServiceError::InvalidRequest(
                "ip_addresses filter does not support pattern matching".into(),


 ... (clipped 3 lines)
Expensive aggregation DoS

Description: Stats mode parses user-supplied expressions and builds a single JSON payload but returns
it wrapped in a one-element array; while SQL is parameterized, complex stats strings and
lack of size limits on 'stats' input or filters can be leveraged to generate expensive
aggregation queries causing database load (DoS).
trace_summaries.rs [161-183]

Referred Code
if let Some(raw_stats) = plan.stats.as_ref().and_then(|s| {
    let trimmed = s.trim();
    if trimmed.is_empty() {
        None
    } else {
        Some(trimmed)
    }
}) {
    let stats = parse_stats(raw_stats)?;
    let (select_sql, mut stats_binds) = build_stats_select(&stats)?;
    sql.push_str("SELECT ");
    sql.push_str(&select_sql);
    sql.push_str(" AS payload\nFROM trace_summaries");
    binds.append(&mut stats_binds);
    let (where_sql, mut where_binds) = build_filters_clause(plan)?;
    sql.push_str(&where_sql);
    binds.append(&mut where_binds);
    return Ok(TraceSummarySql {
        sql,
        binds,
        mode: SummaryMode::Stats,


 ... (clipped 2 lines)
Unbounded time range

Description: Allowing open-ended absolute time ranges (start to 'now' or 'MIN_UTC' to end) without
server-side bounds can enable extremely large scans over time-series tables, risking heavy
DB load and timeouts.
time.rs [62-80]

Referred Code
        start: *start,
        end: *end,
    },
    TimeFilterSpec::AbsoluteOpenEnd { start } => {
        if *start > now {
            return Err(ServiceError::InvalidRequest(
                "time range start must be before end".to_string(),
            ));
        }
        TimeRange {
            start: *start,
            end: now,
        }
    }
    TimeFilterSpec::AbsoluteOpenStart { end } => TimeRange {
        start: DateTime::<Utc>::MIN_UTC,
        end: *end,
    },
};
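A server-side bound on open-ended ranges, as the finding recommends, can clamp the start to a maximum lookback window instead of `DateTime::<Utc>::MIN_UTC`. A sketch using plain Unix seconds — the 30-day window and function name are assumptions, not the PR's behavior:

```rust
// Hypothetical clamp for open-start ranges: never scan further back than
// MAX_LOOKBACK_SECS before the requested end.
const MAX_LOOKBACK_SECS: i64 = 30 * 24 * 3600; // assumed 30-day window

fn clamp_open_start(end_unix: i64, requested_start: Option<i64>) -> (i64, i64) {
    let floor = end_unix - MAX_LOOKBACK_SECS;
    let start = match requested_start {
        Some(s) => s.max(floor), // explicit starts are still bounded
        None => floor,           // open start: fall back to the lookback floor
    };
    (start, end_unix)
}
```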
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Limited auditing: New query paths (Interfaces, Services, OtelMetrics, Traces, TraceSummaries) add database
actions without explicit audit logging of who performed queries or their outcomes, which
may be acceptable if auditing exists elsewhere.

Referred Code
pub async fn execute_query(&self, request: QueryRequest) -> Result<QueryResponse> {
    let ast = parser::parse(&request.query)?;
    let plan = self.plan(&request, ast)?;
    let mut conn = self.pool.get().await.map_err(|err| {
        error!(error = ?err, "failed to acquire database connection");
        ServiceError::Internal(anyhow::anyhow!("{err:?}"))
    })?;

    let results = match plan.entity {
        Entity::Devices => devices::execute(&mut conn, &plan).await?,
        Entity::Events => events::execute(&mut conn, &plan).await?,
        Entity::Interfaces => interfaces::execute(&mut conn, &plan).await?,
        Entity::Logs => logs::execute(&mut conn, &plan).await?,
        Entity::OtelMetrics => otel_metrics::execute(&mut conn, &plan).await?,
        Entity::Services => services::execute(&mut conn, &plan).await?,
        Entity::TraceSummaries => trace_summaries::execute(&mut conn, &plan).await?,
        Entity::Traces => traces::execute(&mut conn, &plan).await?,
    };

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Error context: Several parsing/validation errors return generic InvalidRequest messages without including
offending field/value, which may hinder debugging though may be intentional to avoid
sensitive exposure.

Referred Code

fn parse_i32(filter: &Filter) -> Result<i32> {
    filter
        .value
        .as_scalar()?
        .parse::<i32>()
        .map_err(|_| ServiceError::InvalidRequest("integer value required".into()))
}

fn parse_i64(filter: &Filter) -> Result<i64> {
    filter
        .value
        .as_scalar()?
        .parse::<i64>()
        .map_err(|_| ServiceError::InvalidRequest("numeric value required".into()))
}

fn parse_f64(filter: &Filter) -> Result<f64> {
    filter
        .value
        .as_scalar()?


 ... (clipped 3 lines)

Learn more about managing compliance generic rules or creating your own custom rules

compliance checks <details> <summary>Compliance check up to commit <a href='https://github.com/carverauto/serviceradar/commit/5f5004f210628a707e9017d06a5cd901796086f3'>5f5004f</a></summary><br> <table><tbody><tr><td colspan='2'><strong>Security Compliance</strong></td></tr> <tr><td rowspan=6>⚪</td> <td><details><summary><strong>CSRF via cookie-to-bearer </strong></summary><br> <b>Description:</b> The new authorization resolution falls back to the 'accessToken' cookie and blindly <br>converts it to a 'Bearer' header, which may enable CSRF on the backend if the SRQL <br>endpoint trusts the Authorization header and CORS/CSRF protections are not enforced for <br>this internal call.<br> <strong><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-53eee72ee62c074b7ff0b7556920802805621e9e16bfa91f85675ec55b66bf32R86-R91'>route.ts [86-91]</a></strong><br> <details open><summary>Referred Code</summary> ```typescript const cookieToken = req.cookies.get("accessToken")?.value?.trim(); if (cookieToken) { return `Bearer ${cookieToken}`; } return null; ``` </details></details></td></tr> <tr><td><details><summary><strong>DB resource exhaustion </strong></summary><br> <b>Description:</b> The dynamic SQL stats filters construct 'IN/NOT IN' lists by expanding multiple '?' 
<br>placeholders based on user-provided lists; while values are still bound safely, the <br>absence of a maximum list length or total placeholder cap can be abused to create very <br>large queries and cause database resource exhaustion (DoS).<br> <strong><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-f4d3b33667f2e79a6ebe4cfff931f93c728d9a81c305ac13586e623850a504dbR441-R520'>logs.rs [441-520]</a></strong><br> <details open><summary>Referred Code</summary> ```rust fn build_text_clause(column: &str, filter: &Filter) -> Result<Option<(String, Vec<SqlBindValue>)>> { let mut binds = Vec::new(); let clause = match filter.op { FilterOp::Eq => { binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string())); format!("{column} = ?") } FilterOp::NotEq => { binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string())); format!("{column} <> ?") } FilterOp::Like => { binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string())); format!("{column} ILIKE ?") } FilterOp::NotLike => { binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string())); format!("{column} NOT ILIKE ?") } FilterOp::In | FilterOp::NotIn => { let values = filter.value.as_list()?.to_vec(); ... 
(clipped 59 lines) ``` </details></details></td></tr> <tr><td><details><summary><strong>DB resource exhaustion </strong></summary><br> <b>Description:</b> The stats query builder expands user-provided list filters into many placeholders without <br>bounding list size, allowing excessively large 'IN/NOT IN' clauses that may degrade <br>database performance or cause timeouts (potential DoS).<br> <strong><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-8106bfa2099b8a6d945b338532f8b1108467e2f0592ddbd64d546fcbce3e3613R412-R429'>otel_metrics.rs [412-429]</a></strong><br> <details open><summary>Referred Code</summary> ```rust let values = filter.value.as_list()?.to_vec(); if values.is_empty() { return Ok("1=1".into()); } let mut placeholders = Vec::new(); for value in values { placeholders.push("?".to_string()); binds.push(SqlBindValue::Text(value)); } let operator = if matches!(filter.op, FilterOp::In) { "IN" } else { "NOT IN" }; Ok(format!("{column} {operator} ({})", placeholders.join(", "))) } } } ``` </details></details></td></tr> <tr><td><details><summary><strong>Unbounded input size </strong></summary><br> <b>Description:</b> The 'ip_addresses' array filter permits arbitrarily large lists supplied by users and uses <br>array containment; without limits on list size or request validation, this can lead to <br>heavy query execution and indexing pressure resulting in DoS.<br> <strong><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-1ec833d4525fb2806523888b8c57b76ca8dd2ab70539368ccecc4d262769c8c5R186-R209'>interfaces.rs [186-209]</a></strong><br> <details open><summary>Referred Code</summary> ```rust } "ip_addresses" | "ip_address" => { let values: Vec<String> = match &filter.value { FilterValue::Scalar(value) => vec![value.to_string()], FilterValue::List(list) => list.clone(), }; if values.is_empty() { return Ok(query); } let expr = sql::<Bool>("coalesce(ip_addresses, ARRAY[]::text[]) @> ") .bind::<Array<Text>, 
_>(values); match filter.op { FilterOp::Eq | FilterOp::In => { query = query.filter(expr); } FilterOp::NotEq | FilterOp::NotIn => { query = query.filter(not(expr)); } FilterOp::Like | FilterOp::NotLike => { return Err(ServiceError::InvalidRequest( "ip_addresses filter does not support pattern matching".into(), ... (clipped 3 lines) ``` </details></details></td></tr> <tr><td><details><summary><strong>Expensive aggregation DoS </strong></summary><br> <b>Description:</b> Stats mode parses user-supplied expressions and builds a single JSON payload but returns <br>it wrapped in a one-element array; while SQL is parameterized, complex stats strings and <br>lack of size limits on 'stats' input or filters can be leveraged to generate expensive <br>aggregation queries causing database load (DoS).<br> <strong><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-ff98619eb5dc4d75205d1797477e9e2880f42081e8d0c61d6fd1d6e5926b72faR161-R183'>trace_summaries.rs [161-183]</a></strong><br> <details open><summary>Referred Code</summary> ```rust if let Some(raw_stats) = plan.stats.as_ref().and_then(|s| { let trimmed = s.trim(); if trimmed.is_empty() { None } else { Some(trimmed) } }) { let stats = parse_stats(raw_stats)?; let (select_sql, mut stats_binds) = build_stats_select(&stats)?; sql.push_str("SELECT "); sql.push_str(&select_sql); sql.push_str(" AS payload\nFROM trace_summaries"); binds.append(&mut stats_binds); let (where_sql, mut where_binds) = build_filters_clause(plan)?; sql.push_str(&where_sql); binds.append(&mut where_binds); return Ok(TraceSummarySql { sql, binds, mode: SummaryMode::Stats, ... 
(clipped 2 lines) ``` </details></details></td></tr> <tr><td><details><summary><strong>Unbounded time range </strong></summary><br> <b>Description:</b> Allowing open-ended absolute time ranges (start to 'now' or 'MIN_UTC' to end) without <br>server-side bounds can enable extremely large scans over time-series tables, risking heavy <br>DB load and timeouts.<br> <strong><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-53912bc0c1e18ee038a27561478ebac74affd9d13a4fdf24e760a1694ea5b587R62-R80'>time.rs [62-80]</a></strong><br> <details open><summary>Referred Code</summary> ```rust start: *start, end: *end, }, TimeFilterSpec::AbsoluteOpenEnd { start } => { if *start > now { return Err(ServiceError::InvalidRequest( "time range start must be before end".to_string(), )); } TimeRange { start: *start, end: now, } } TimeFilterSpec::AbsoluteOpenStart { end } => TimeRange { start: DateTime::<Utc>::MIN_UTC, end: *end, }, }; ``` </details></details></td></tr> <tr><td colspan='2'><strong>Ticket Compliance</strong></td></tr> <tr><td>⚪</td><td><details><summary>🎫 <strong>No ticket provided </strong></summary> - [ ] Create ticket/issue <!-- /create_ticket --create_ticket=true --> </details></td></tr> <tr><td colspan='2'><strong>Codebase Duplication Compliance</strong></td></tr> <tr><td>⚪</td><td><details><summary><strong>Codebase context is not defined </strong></summary> Follow the <a href='https://qodo-merge-docs.qodo.ai/core-abilities/rag_context_enrichment/'>guide</a> to enable codebase context checks. 
</details></td></tr> <tr><td colspan='2'><strong>Custom Compliance</strong></td></tr> <tr><td rowspan=4>🟢</td><td> <details><summary><strong>Generic: Meaningful Naming and Self-Documenting Code</strong></summary><br> **Objective:** Ensure all identifiers clearly express their purpose and intent, making code <br>self-documenting<br> **Status:** Passed<br> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td> <details><summary><strong>Generic: Secure Error Handling</strong></summary><br> **Objective:** To prevent the leakage of sensitive system information through error messages while <br>providing sufficient detail for internal debugging.<br> **Status:** Passed<br> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td> <details><summary><strong>Generic: Secure Logging Practices</strong></summary><br> **Objective:** To ensure logs are useful for debugging and auditing without exposing sensitive <br>information like PII, PHI, or cardholder data.<br> **Status:** Passed<br> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td> <details><summary><strong>Generic: Security-First Input Validation and Data Handling</strong></summary><br> **Objective:** Ensure all data inputs are validated, sanitized, and handled securely to prevent <br>vulnerabilities<br> **Status:** Passed<br> > Learn more about 
managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td rowspan=2>⚪</td> <td><details> <summary><strong>Generic: Comprehensive Audit Trails</strong></summary><br> **Objective:** To create a detailed and reliable record of critical system actions for security analysis <br>and compliance.<br> **Status:** <br><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-393e3fa3d0e41741834cd7cd398a06111ab7b141ae6caca7a5dcc0e036172491R120-R137'><strong>Limited auditing</strong></a>: New query paths (Interfaces, Services, OtelMetrics, Traces, TraceSummaries) add database <br>actions without explicit audit logging of who performed queries or their outcomes, which <br>may be acceptable if auditing exists elsewhere.<br> <details open><summary>Referred Code</summary> ```rust pub async fn execute_query(&self, request: QueryRequest) -> Result<QueryResponse> { let ast = parser::parse(&request.query)?; let plan = self.plan(&request, ast)?; let mut conn = self.pool.get().await.map_err(|err| { error!(error = ?err, "failed to acquire database connection"); ServiceError::Internal(anyhow::anyhow!("{err:?}")) })?; let results = match plan.entity { Entity::Devices => devices::execute(&mut conn, &plan).await?, Entity::Events => events::execute(&mut conn, &plan).await?, Entity::Interfaces => interfaces::execute(&mut conn, &plan).await?, Entity::Logs => logs::execute(&mut conn, &plan).await?, Entity::OtelMetrics => otel_metrics::execute(&mut conn, &plan).await?, Entity::Services => services::execute(&mut conn, &plan).await?, Entity::TraceSummaries => trace_summaries::execute(&mut conn, &plan).await?, Entity::Traces => traces::execute(&mut conn, &plan).await?, }; ``` </details> > Learn more about managing compliance <a 
href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td><details> <summary><strong>Generic: Robust Error Handling and Edge Case Management</strong></summary><br> **Objective:** Ensure comprehensive error handling that provides meaningful context and graceful <br>degradation<br> **Status:** <br><a href='https://github.com/carverauto/serviceradar/pull/1950/files#diff-ff98619eb5dc4d75205d1797477e9e2880f42081e8d0c61d6fd1d6e5926b72faR368-R391'><strong>Error context</strong></a>: Several parsing/validation errors return generic InvalidRequest messages without including <br>offending field/value, which may hinder debugging though may be intentional to avoid <br>sensitive exposure.<br> <details open><summary>Referred Code</summary> ```rust fn parse_i32(filter: &Filter) -> Result<i32> { filter .value .as_scalar()? .parse::<i32>() .map_err(|_| ServiceError::InvalidRequest("integer value required".into())) } fn parse_i64(filter: &Filter) -> Result<i64> { filter .value .as_scalar()? .parse::<i64>() .map_err(|_| ServiceError::InvalidRequest("numeric value required".into())) } fn parse_f64(filter: &Filter) -> Result<f64> { filter .value .as_scalar()? ... (clipped 3 lines) ``` </details> > Learn more about managing compliance <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#configuration-options'>generic rules</a> or creating your own <a href='https://qodo-merge-docs.qodo.ai/tools/compliance/#custom-compliance'>custom rules</a> </details></td></tr> <tr><td align="center" colspan="2"> <!-- /compliance --update_compliance=true --> </td></tr></tbody></table> </details>
qodo-code-review[bot] commented 2025-11-18 04:50:10 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1950#issuecomment-3545021973
Original created: 2025-11-18T04:50:10Z

PR Code Suggestions

Latest suggestions up to 9d3aedb

Category | Suggestion | Impact
Security
Redact DATABASE_URL in logs

Redact the database credentials from the SRQL_DATABASE_URL before logging it to
prevent leaking secrets.

docker/compose/entrypoint-srql.sh [151-152]

 echo "SRQL listening on ${SRQL_LISTEN_HOST}:${SRQL_LISTEN_PORT} (database target: ${DB_TARGET_DESC})"
-echo "SRQL DATABASE_URL=${SRQL_DATABASE_URL}"
+# Redact password/user in logs to avoid credential leakage
+if [ -n "${SRQL_DATABASE_URL}" ]; then
+  REDACTED_URL="$(printf '%s' "${SRQL_DATABASE_URL}" | sed -E 's#(postgres(ql)?://)([^:@/]+)(:([^@/]*))?@#\1****:****@#')"
+  echo "SRQL DATABASE_URL=${REDACTED_URL}"
+fi
Suggestion importance[1-10]: 9


Why: This suggestion correctly identifies a critical security vulnerability where database credentials are leaked into logs, and the proposed fix effectively redacts the sensitive information.

High
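The suggested sed pattern can be exercised outside the entrypoint; below is a minimal Python sketch using the same regex (the URL and credentials are made up for illustration):

```python
import re

# Same pattern as the suggested sed expression, applied with Python's re module.
PATTERN = re.compile(r"(postgres(ql)?://)([^:@/]+)(:([^@/]*))?@")

def redact(url: str) -> str:
    # Replace the user:password section with placeholders; host/port/db stay intact.
    return PATTERN.sub(r"\1****:****@", url)

# Hypothetical credentials for illustration only.
print(redact("postgresql://admin:s3cret@db.example.com:5432/serviceradar"))
# → postgresql://****:****@db.example.com:5432/serviceradar
```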
Validate Bearer auth scheme

Validate the Authorization header to ensure it uses the Bearer scheme. Reject
any other format to avoid forwarding potentially invalid or malicious headers.

web/src/app/api/query/route.ts [78-92]

 function resolveAuthorizationHeader(req: NextRequest): string | null {
-  const header =
-    req.headers.get("Authorization") || req.headers.get("authorization") || "";
-  const trimmed = header.trim();
+  const raw = req.headers.get("authorization") ?? req.headers.get("Authorization") ?? "";
+  const trimmed = raw.trim();
   if (trimmed) {
-    return trimmed;
+    // Only allow Bearer scheme
+    const m = /^Bearer\s+([A-Za-z0-9._\-~+/=]+)$/.exec(trimmed);
+    if (m) {
+      return `Bearer ${m[1]}`;
+    }
+    // Any non-empty but non-Bearer auth is rejected instead of forwarded
+    return null;
   }
 
   const cookieToken = req.cookies.get("accessToken")?.value?.trim();
   if (cookieToken) {
     return `Bearer ${cookieToken}`;
   }
 
   return null;
 }
Suggestion importance[1-10]: 8


Why: This is a valuable security hardening suggestion that ensures only Bearer tokens are forwarded, preventing potential issues with malformed or unexpected Authorization header schemes in the upstream service.

Medium
Possible issue
Correct empty IN list handling

Correct the handling of empty IN and NOT IN filter lists. An empty IN list
should result in a 1=0 (always false) condition, while an empty NOT IN list
should result in 1=1 (always true).

rust/srql/src/query/otel_metrics.rs [389-429]

 fn build_text_clause(
     column: &str,
     filter: &Filter,
     binds: &mut Vec<SqlBindValue>,
 ) -> Result<String> {
     match filter.op {
         FilterOp::Eq => {
             binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
             Ok(format!("{column} = ?"))
         }
         FilterOp::NotEq => {
             binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
             Ok(format!("{column} <> ?"))
         }
         FilterOp::Like => {
             binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
             Ok(format!("{column} ILIKE ?"))
         }
         FilterOp::NotLike => {
             binds.push(SqlBindValue::Text(filter.value.as_scalar()?.to_string()));
             Ok(format!("{column} NOT ILIKE ?"))
         }
         FilterOp::In | FilterOp::NotIn => {
             let values = filter.value.as_list()?.to_vec();
             if values.is_empty() {
-                return Ok("1=1".into());
+                // Empty IN should match nothing; empty NOT IN should match everything
+                return Ok(match filter.op {
+                    FilterOp::In => "1=0".into(),
+                    FilterOp::NotIn => "1=1".into(),
+                    _ => unreachable!(),
+                });
             }
             let mut placeholders = Vec::new();
             for value in values {
                 placeholders.push("?".to_string());
                 binds.push(SqlBindValue::Text(value));
             }
             let operator = if matches!(filter.op, FilterOp::In) {
                 "IN"
             } else {
                 "NOT IN"
             };
             Ok(format!("{column} {operator} ({})", placeholders.join(", ")))
         }
     }
 }
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a logical bug where an empty IN clause resolves to 1=1, effectively ignoring the filter. This could lead to unintended data exposure or incorrect query results, making it a significant correctness fix.

Medium
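The intended empty-list semantics can be confirmed against a live database; a quick sqlite3 sketch with hypothetical rows shows why rendering an empty IN as 1=1 silently disables the filter, while 1=0 correctly matches nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (name TEXT)")
conn.executemany("INSERT INTO metrics VALUES (?)", [("cpu",), ("mem",)])

# Empty IN list rendered as 1=0: matches no rows, as a filter should.
none_matched = conn.execute("SELECT count(*) FROM metrics WHERE 1=0").fetchone()[0]
# Empty IN list rendered as 1=1: matches every row, silently ignoring the filter.
all_matched = conn.execute("SELECT count(*) FROM metrics WHERE 1=1").fetchone()[0]
print(none_matched, all_matched)  # → 0 2
```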
Harden TLS root cert loading

Improve the robustness of TLS root certificate loading by validating that at
least one certificate is loaded, using a more appropriate API for parsing, and
providing clearer error messages for failures.

rust/srql/src/db.rs [93-106]

 fn build_tls_connector(path: &str) -> Result<MakeRustlsConnect> {
-    let mut reader = BufReader::new(File::open(path).context("failed to open PGSSLROOTCERT")?);
+    let f = File::open(path).with_context(|| format!("failed to open PGSSLROOTCERT at {path}"))?;
+    let mut reader = BufReader::new(f);
+
     let mut root_store = RootCertStore::empty();
-    for cert in certs(&mut reader) {
-        let cert = cert.context("failed to parse PGSSLROOTCERT")?;
-        root_store
-            .add(cert)
-            .map_err(|_| anyhow::anyhow!("invalid certificate in PGSSLROOTCERT"))?;
+    let mut collected: Vec<rustls::pki_types::CertificateDer<'static>> = Vec::new();
+    for item in certs(&mut reader) {
+        let der = item.context("failed to parse certificate from PGSSLROOTCERT")?;
+        collected.push(der);
     }
+
+    if collected.is_empty() {
+        anyhow::bail!("PGSSLROOTCERT contains no certificates");
+    }
+
+    // add_parsable_certificates consumes the DER certs directly and
+    // returns (added, ignored) counts; no manual conversion is needed.
+    let (accepted, _ignored) = root_store.add_parsable_certificates(collected);
+
+    if accepted == 0 {
+        anyhow::bail!("no valid certificates could be loaded from PGSSLROOTCERT");
+    }
+
     let config = ClientConfig::builder()
         .with_root_certificates(root_store)
         .with_no_client_auth();
+
     Ok(MakeRustlsConnect::new(config))
 }
  • Apply / Chat
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a lack of robustness in the new TLS certificate loading logic and provides a more resilient implementation with clearer error handling, which is important for production stability.

Medium
Avoid MIN_UTC fallback and clarify errors

Refine the AbsoluteOpenStart time range logic to provide a more specific error
message instead of falling back to MIN_UTC, which can cause a generic range
length error.

rust/srql/src/time.rs [30-83]

 impl TimeFilterSpec {
     pub fn resolve(&self, now: DateTime<Utc>) -> Result<TimeRange> {
         let max_duration = Duration::days(MAX_TIME_RANGE_DAYS);
         let range = match self {
             TimeFilterSpec::RelativeHours(hours) => TimeRange {
                 start: now - Duration::hours(*hours),
                 end: now,
             },
-            ...
+            // ... other variants unchanged ...
             TimeFilterSpec::AbsoluteOpenEnd { start } => {
                 if *start > now {
                     return Err(ServiceError::InvalidRequest(
                         "time range start must be before end".to_string(),
                     ));
                 }
-                TimeRange {
-                    start: *start,
-                    end: now,
-                }
+                TimeRange { start: *start, end: now }
             }
             TimeFilterSpec::AbsoluteOpenStart { end } => {
-                let start = end
-                    .checked_sub_signed(max_duration)
-                    .unwrap_or(DateTime::<Utc>::MIN_UTC);
+                // Ensure we can honor the max window without underflowing and provide a clear error.
+                if now < *end {
+                    return Err(ServiceError::InvalidRequest(
+                        "time range end must be before now".to_string(),
+                    ));
+                }
+                let earliest_allowed = *end - max_duration;
+                let start = earliest_allowed;
                 TimeRange { start, end: *end }
             }
         };
 
+        if range.start > range.end {
+            return Err(ServiceError::InvalidRequest(
+                "time range start must be before end".to_string(),
+            ));
+        }
+        let span = range.end.signed_duration_since(range.start);
+        if span > max_duration {
+            return Err(ServiceError::InvalidRequest(format!(
+                "time range cannot exceed {MAX_TIME_RANGE_DAYS} days"
+            )));
+        }
+        Ok(range)
+    }
+}
+

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies an edge case in the new time range parsing logic that could lead to confusing error messages and improves the code by handling it explicitly.

Low
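The suggested open-start handling amounts to clamping the window to the maximum allowed span rather than falling back to MIN_UTC; a Python sketch under the assumption of a 30-day cap (the real constant is MAX_TIME_RANGE_DAYS in time.rs):

```python
from datetime import datetime, timedelta, timezone

MAX_TIME_RANGE_DAYS = 30  # assumed value; the real constant lives in time.rs

def resolve_open_start(end: datetime) -> tuple[datetime, datetime]:
    # Instead of falling back to MIN_UTC (which later trips the max-length
    # check with a confusing error), derive the earliest start the cap allows.
    start = end - timedelta(days=MAX_TIME_RANGE_DAYS)
    return start, end

start, end = resolve_open_start(datetime(2025, 11, 18, tzinfo=timezone.utc))
print(start.isoformat())  # → 2025-10-19T00:00:00+00:00
```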
Incremental [*]
Fix NULL-safe NOT IN semantics

Modify the SQL generation for NotEq, NotLike, and NotIn text filters to be
NULL-safe by including an IS NULL check, preventing rows with NULL values from
being unintentionally filtered out.

rust/srql/src/query/trace_summaries.rs [256-303]

 fn add_text_condition(
     clauses: &mut Vec<String>,
     binds: &mut Vec<SqlBindValue>,
     column: &str,
     filter: &Filter,
 ) -> Result<()> {
     match filter.op {
         FilterOp::Eq => {
             let value = filter.value.as_scalar()?.to_string();
             clauses.push(format!("{column} = ?"));
             binds.push(SqlBindValue::Text(value));
         }
         FilterOp::NotEq => {
             let value = filter.value.as_scalar()?.to_string();
-            clauses.push(format!("{column} <> ?"));
+            clauses.push(format!("({column} IS NULL OR {column} <> ?)"));
             binds.push(SqlBindValue::Text(value));
         }
         FilterOp::Like => {
             let value = filter.value.as_scalar()?.to_string();
             clauses.push(format!("{column} ILIKE ?"));
             binds.push(SqlBindValue::Text(value));
         }
         FilterOp::NotLike => {
             let value = filter.value.as_scalar()?.to_string();
-            clauses.push(format!("{column} NOT ILIKE ?"));
+            clauses.push(format!("({column} IS NULL OR {column} NOT ILIKE ?)"));
             binds.push(SqlBindValue::Text(value));
         }
         FilterOp::In => {
             let values = filter.value.as_list()?.to_vec();
             if values.is_empty() {
                 clauses.push("1=0".to_string());
                 return Ok(());
             }
             clauses.push(format!("{column} = ANY(?)"));
             binds.push(SqlBindValue::TextArray(values));
         }
         FilterOp::NotIn => {
             let values = filter.value.as_list()?.to_vec();
             if values.is_empty() {
                 return Ok(());
             }
-            clauses.push(format!("{column} <> ALL(?)"));
+            clauses.push(format!("({column} IS NULL OR {column} <> ALL(?))"));
             binds.push(SqlBindValue::TextArray(values));
         }
     }
 
     Ok(())
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that NOT IN and <> operators in SQL can produce unexpected results with NULL values, and proposes a valid fix to make these comparisons NULL-safe, which improves query correctness.

Medium
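SQL's three-valued logic is the root cause here: `NULL <> 'x'` evaluates to NULL, not true, so unguarded negative filters drop NULL rows. A sqlite3 sketch with hypothetical rows shows the difference the guard makes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spans (service TEXT)")
conn.executemany("INSERT INTO spans VALUES (?)", [("api",), ("db",), (None,)])

# NULL NOT IN (...) evaluates to NULL, so the NULL row is silently excluded.
plain = conn.execute(
    "SELECT count(*) FROM spans WHERE service NOT IN ('api')"
).fetchone()[0]

# The suggested IS NULL guard keeps NULL rows in the result.
guarded = conn.execute(
    "SELECT count(*) FROM spans WHERE service IS NULL OR service NOT IN ('api')"
).fetchone()[0]

print(plain, guarded)  # → 1 2
```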
General
Remove conflicting duplicate field

Remove the redundant raw_data field from the JSON output of
TraceSpanRow::into_json. The attributes field already provides the same data.

rust/srql/src/models.rs [205-230]

 impl TraceSpanRow {
     pub fn into_json(self) -> serde_json::Value {
         serde_json::json!({
             "timestamp": self.timestamp,
             "trace_id": self.trace_id,
             "span_id": self.span_id,
             "parent_span_id": self.parent_span_id,
             "name": self.name,
             "kind": self.kind,
             "start_time_unix_nano": self.start_time_unix_nano,
             "end_time_unix_nano": self.end_time_unix_nano,
             "service_name": self.service_name,
             "service_version": self.service_version,
             "service_instance": self.service_instance,
             "scope_name": self.scope_name,
             "scope_version": self.scope_version,
             "status_code": self.status_code,
             "status_message": self.status_message,
-            "attributes": self.attributes.clone(),
+            "attributes": self.attributes,
             "resource_attributes": self.resource_attributes,
             "events": self.events,
-            "links": self.links,
-            "raw_data": self.attributes.unwrap_or_default(),
+            "links": self.links
         })
     }
 }
Suggestion importance[1-10]: 5


Why: The suggestion correctly points out that the raw_data field is a redundant and potentially confusing duplication of the attributes field, improving the clarity and consistency of the JSON output.

Low

Previous suggestions

Suggestions up to commit 5f5004f
Category | Suggestion | Impact
High-level
Unify query generation using Diesel

Standardize query generation by using the Diesel query builder for all queries,
including statistics. This will eliminate inconsistent raw SQL string
manipulation and duplicated logic across different query handlers.

Examples:

rust/srql/src/query/trace_summaries.rs [130-202]
fn build_summary_query(plan: &QueryPlan) -> Result<TraceSummarySql> {
    let mut sql = String::from("WITH trace_summaries AS (\n");
    sql.push_str(
        "    SELECT\n        trace_id,\n        max(timestamp) AS timestamp,\n        max(span_id) FILTER (WHERE ",
    );
    sql.push_str(ROOT_PREDICATE);
    sql.push_str(") AS root_span_id,\n        max(name) FILTER (WHERE ");
    sql.push_str(ROOT_PREDICATE);
    sql.push_str(") AS root_span_name,\n        max(service_name) FILTER (WHERE ");
    sql.push_str(ROOT_PREDICATE);

 ... (clipped 63 lines)
rust/srql/src/query/logs.rs [122-166]
fn build_stats_query(plan: &QueryPlan) -> Result<Option<LogsStatsSql>> {
    let stats_raw = match plan.stats.as_ref() {
        Some(raw) if !raw.trim().is_empty() => raw.trim(),
        _ => return Ok(None),
    };

    let expressions = parse_stats_expressions(stats_raw)?;
    if expressions.is_empty() {
        return Err(ServiceError::InvalidRequest(
            "stats expression required for logs queries".into(),

 ... (clipped 35 lines)

Solution Walkthrough:

Before:

// In rust/srql/src/query/trace_summaries.rs
fn build_summary_query(plan: &QueryPlan) -> Result<TraceSummarySql> {
    let mut sql = String::from("WITH trace_summaries AS (...)");
    // ... manual SQL string building ...
    if plan.stats.is_some() {
        sql.push_str("SELECT jsonb_build_object(...) AS payload FROM trace_summaries");
        // ... more manual SQL string building ...
    }
    // ...
    let mut binds = Vec::new(); // Manual bind value collection
    // ...
    Ok(TraceSummarySql { sql, binds, ... })
}

fn rewrite_placeholders(sql: &str) -> String { /* ... converts ? to $1, $2 ... */ }

After:

// In a shared query utility module
pub trait StatsQueryBuilder {
    fn build_stats_query<'a>(plan: &'a QueryPlan) -> Result<BoxedSqlQuery<'a, ...>>;
}

// In rust/srql/src/query/trace_summaries.rs
impl StatsQueryBuilder for TraceSummaries {
    fn build_stats_query<'a>(plan: &'a QueryPlan) -> Result<BoxedSqlQuery<'a, ...>> {
        let mut query = sql_query("SELECT jsonb_build_object(...) FROM (...) AS trace_summaries")
            .into_boxed();

        // Use Diesel's .bind() for all parameters
        for expr in plan.stats_expressions {
            query = query.bind::<Text, _>(expr.alias);
            // ... bind other parts of the expression
        }
        // ...
        Ok(query)
    }
}

Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a major design flaw where multiple query handlers use inconsistent and duplicated raw SQL string building, which increases maintenance overhead and risk, making it a critical issue to address.

High
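The walkthrough above mentions a `rewrite_placeholders` helper that converts `?` placeholders into numbered Postgres parameters. As an illustrative sketch only (not the project's actual implementation — it deliberately ignores `?` characters inside quoted SQL literals), the idea is a single scan:

```rust
/// Rewrite `?` placeholders into numbered Postgres parameters (`$1`, `$2`, ...).
/// Naive sketch: does not skip `?` characters inside quoted SQL literals.
fn rewrite_placeholders(sql: &str) -> String {
    let mut out = String::with_capacity(sql.len());
    let mut n = 0;
    for ch in sql.chars() {
        if ch == '?' {
            n += 1;
            out.push('$');
            out.push_str(&n.to_string());
        } else {
            out.push(ch);
        }
    }
    out
}
```

Moving to Diesel's `.bind()` as suggested would make this manual numbering unnecessary, since the query builder tracks parameter positions itself.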
Possible issue
Fix incorrect multi-byte character encoding

Refactor the urlencode function to correctly handle multi-byte characters by
iterating over bytes instead of characters, which prevents incorrect encoding.

docker/compose/entrypoint-srql.sh [20-43]

 urlencode() {
     # Percent-encode arbitrary bytes (safe for UTF-8 secrets)
-    local input="$1"
-    local out=""
-    local i=1
-    local len char hex
-    LC_ALL=C
-    len=$(printf '%s' "$input" | wc -c | tr -d '[:space:]')
-    while [ "$i" -le "$len" ]; do
-        char=$(printf '%s' "$input" | cut -c "$i")
-        case "$char" in
-            [a-zA-Z0-9.~_-])
-                out="${out}${char}"
-                ;;
-            *)
-                hex=$(printf '%s' "$char" | od -An -tx1 | head -n 1 | tr -d ' \n')
-                hex=$(printf '%s' "$hex" | tr 'a-f' 'A-F')
-                out="${out}%${hex}"
-                ;;
+    local data_hex out byte
+    data_hex=$(printf '%s' "$1" | od -An -tx1)
+    out=""
+    for byte in $data_hex; do
+        case "$byte" in
+            # 0-9
+            30|31|32|33|34|35|36|37|38|39) out="${out}$(printf "\\x$byte")" ;;
+            # A-Z
+            41|42|43|44|45|46|47|48|49|4a|4b|4c|4d|4e|4f|50|51|52|53|54|55|56|57|58|59|5a) out="${out}$(printf "\\x$byte")" ;;
+            # a-z
+            61|62|63|64|65|66|67|68|69|6a|6b|6c|6d|6e|6f|70|71|72|73|74|75|76|77|78|79|7a) out="${out}$(printf "\\x$byte")" ;;
+            # .~_-
+            2d|2e|5f|7e) out="${out}$(printf "\\x$byte")" ;;
+            *) out="${out}%$(printf '%s' "$byte" | tr 'a-f' 'A-F')" ;;
         esac
-        i=$((i + 1))
     done
     printf '%s' "$out"
 }
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a critical bug in the new urlencode implementation that breaks encoding for multi-byte characters and provides a robust fix.

High
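The core of this fix is iterating over raw bytes rather than characters, so multi-byte UTF-8 sequences are encoded byte by byte. The same idea, sketched in Rust purely for illustration (RFC 3986 unreserved characters pass through, everything else is percent-encoded):

```rust
/// Percent-encode arbitrary bytes, leaving RFC 3986 unreserved characters as-is.
/// Operating on bytes (not chars) keeps multi-byte UTF-8 sequences correct.
fn urlencode(input: &str) -> String {
    let mut out = String::new();
    for &b in input.as_bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'.' | b'~' | b'_' | b'-' => {
                out.push(b as char);
            }
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}
```

A two-byte character such as `ö` (UTF-8 `C3 B6`) becomes `%C3%B6`, whereas the original character-wise shell loop could mangle it.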
Correctly handle empty IN filter
Suggestion Impact: The commit adds clauses.push("1=0".to_string()); when the IN filter values are empty, implementing the suggested behavior.

code diff:

@@ -282,6 +282,7 @@
         FilterOp::In => {
             let values = filter.value.as_list()?.to_vec();
             if values.is_empty() {
+                clauses.push("1=0".to_string());
                 return Ok(());
             }
             clauses.push(format!("{column} = ANY(?)"));

Handle an empty IN filter by adding an always-false condition like 1=0 to the
query, ensuring it correctly returns no results.

rust/srql/src/query/trace_summaries.rs [282-289]

 fn add_text_condition(
     clauses: &mut Vec<String>,
     binds: &mut Vec<SqlBindValue>,
     column: &str,
     filter: &Filter,
 ) -> Result<()> {
     match filter.op {
         ...
         FilterOp::In => {
             let values = filter.value.as_list()?.to_vec();
             if values.is_empty() {
+                clauses.push("1=0".to_string());
                 return Ok(());
             }
             clauses.push(format!("{column} = ANY(?)"));
             binds.push(SqlBindValue::TextArray(values));
         }
         ...
     }
 
     Ok(())
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a bug where an empty IN clause is ignored, leading to incorrect query results, and proposes a valid fix.

Medium
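The semantics this fix encodes can be shown in isolation with a hypothetical helper (not the project's code): an empty `IN` list should match no rows, while an empty `NOT IN` list should match every row:

```rust
/// Build the SQL fragment for an `IN` / `NOT IN` filter over text values.
/// Empty `IN` matches no rows (`1=0`); empty `NOT IN` matches all rows (`1=1`).
fn membership_clause(column: &str, values: &[&str], negated: bool) -> String {
    if values.is_empty() {
        return if negated { "1=1".into() } else { "1=0".into() };
    }
    let op = if negated { "<> ALL" } else { "= ANY" };
    format!("{column} {op}(?)")
}
```

Silently dropping the clause instead, as the pre-fix code did, turns "match none of these values" into "match everything", which is the bug the suggestion addresses.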
Security
Avoid hardcoding sensitive API keys
Suggestion Impact: The commit removed the hardcoded "x-api-key" from headers and added logic in the setup script to inject the API key dynamically into logging.otel.headers from a secret-derived variable, avoiding hardcoding in the configmap.

code diff:

@@ -238,9 +238,7 @@
             "service_name": "serviceradar-core",
             "batch_timeout": "5s",
             "insecure": false,
-            "headers": {
-              "x-api-key": "your-collector-api-key"
-            },
+            "headers": {},
             "tls": {
               "cert_file": "/etc/serviceradar/certs/core.pem",
               "key_file": "/etc/serviceradar/certs/core-key.pem",
@@ -338,9 +336,7 @@
             "service_name": "serviceradar-poller",
             "batch_timeout": "5s",
             "insecure": false,
-            "headers": {
-              "x-api-key": "your-collector-api-key"
-            },
+            "headers": {},
             "tls": {
               "cert_file": "/etc/serviceradar/certs/poller.pem",
               "key_file": "/etc/serviceradar/certs/poller-key.pem",
@@ -954,11 +950,15 @@
               | .cnpg.cert_dir = $certdir
               | .auth.jwt_secret = $jwt
               | .auth.jwt_expiration = "24h"
-              | .auth.local_users.admin = $bcrypt
-              | .api_key = $api_key
-              | (if .srql != null then (.srql.api_key = $api_key) else . end)
-              | .edge_onboarding.encryption_key = $edge_key' \
-             "$CONFIG_PATH" > "$TMP_CONFIG"
+             | .auth.local_users.admin = $bcrypt
+             | .api_key = $api_key
+             | (if .srql != null then (.srql.api_key = $api_key) else . end)
+             | (if (.logging? // null) != null and (.logging.otel? // null) != null
+                then (.logging.otel.headers = ((.logging.otel.headers // {}) + {"x-api-key": $api_key}))
+                else .
+               end)
+             | .edge_onboarding.encryption_key = $edge_key' \
+            "$CONFIG_PATH" > "$TMP_CONFIG"

Replace the hardcoded OpenTelemetry API key in configmap.yaml with a reference
to a Kubernetes secret to avoid security risks.

k8s/demo/base/configmap.yaml [241-243]

 "headers": {
-  "x-api-key": "your-collector-api-key"
+  "x-api-key": {
+    "valueFrom": {
+      "secretKeyRef": {
+        "name": "otel-collector-secret",
+        "key": "api-key"
+      }
+    }
+  }
 },

[Suggestion processed]

Suggestion importance[1-10]: 8


Why: The suggestion addresses a security risk by advising against a hardcoded placeholder API key and correctly recommends using a Kubernetes secret instead.

Medium
Imported GitHub PR comment. Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1950#issuecomment-3545021973
Original created: 2025-11-18T04:50:10Z