Update/otel schema log fix #2703

Merged
mfreeman451 merged 2 commits from refs/pull/2703/head into staging 2026-01-18 20:29:18 +00:00
mfreeman451 commented 2026-01-18 20:27:02 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #2358
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/2358
Original created: 2026-01-18T20:27:02Z
Original updated: 2026-01-18T20:29:20Z
Original head: carverauto/serviceradar:update/otel-schema-log-fix
Original base: staging
Original merged: 2026-01-18T20:29:18Z by @mfreeman451

User description

IMPORTANT: Please sign the Developer Certificate of Origin

Thank you for your contribution to ServiceRadar. Please note that contributors must include
a DCO sign-off statement indicating DCO acceptance in at least one commit message. Here
is an example Signed-off-by line in a commit message:

Signed-off-by: J. Doe <j.doe@domain.com>

Describe your changes

Code checklist before requesting a review

  • I have signed the DCO?
  • The build completes without errors?
  • All tests are passing when running make test?

PR Type

Enhancement


Description

  • Add four new OpenTelemetry log fields: observed_timestamp, trace_flags, event_name, scope_attributes

  • Update log processors across Elixir, Go, and Rust to parse and persist new OTEL schema fields

  • Enhance log UI to augment nested attributes and extract resource/scope metadata from attributes

  • Add database migration and update schema definitions across all services
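
For orientation, the four new columns might map onto the Go log row struct roughly as follows. This is a sketch, not the PR's actual code: the field names and types are assumptions, and the real definition lives in otel.go in this change set.

```go
package main

import (
	"fmt"
	"time"
)

// OTELLogRow sketches how the four new OTEL schema fields might sit on the
// log row struct. Names and types here are illustrative guesses; see otel.go
// in the diff for the actual definition.
type OTELLogRow struct {
	Timestamp         time.Time         // existing: when the event occurred
	ObservedTimestamp *time.Time        // new: when the collector first saw the record
	TraceFlags        *int32            // new: W3C trace flags (a single byte)
	EventName         *string           // new: OTLP event_name
	ScopeAttributes   map[string]string // new: instrumentation-scope attributes
}

func main() {
	name := "user.login"
	flags := int32(1)
	row := OTELLogRow{
		Timestamp:       time.Now().UTC(),
		TraceFlags:      &flags,
		EventName:       &name,
		ScopeAttributes: map[string]string{"otel.scope.name": "auth"},
	}
	fmt.Println(*row.EventName, *row.TraceFlags)
}
```

The pointer types mark the new fields as nullable, on the assumption that the migration adds the columns without NOT NULL constraints so existing rows remain valid.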


Diagram Walkthrough

flowchart LR
  A["OTEL Log Input"] -->|Parse new fields| B["Log Processors"]
  B -->|Elixir/Go/Rust| C["Database Schema"]
  C -->|Store fields| D["Logs Table"]
  D -->|Query via SRQL| E["Log UI"]
  E -->|Augment attributes| F["Display OTEL Fields"]

File Walkthrough

Relevant files

Enhancement (10 files)
  • logs.ex: Add OTEL field parsing functions to log processor (+48/-0)
  • log.ex: Add new OTEL attributes to Log resource schema (+24/-0)
  • show.ex: Augment log data with nested attribute extraction (+107/-2)
  • json_logs.go: Add parsing for new OTEL log fields in JSON (+161/-47)
  • processor.go: Update log row creation to include new OTEL fields (+57/-43)
  • cnpg_observability.go: Update SQL insert statement with new columns (+10/-1)
  • otel.go: Add new fields to OTELLogRow struct (+4/-0)
  • otel_logs.rs: Extract scope attributes and trace flags in Rust (+9/-0)
  • models.rs: Add new fields to LogRow struct and JSON serialization (+8/-0)
  • schema.rs: Update Diesel schema definition with new columns (+4/-0)

Configuration changes (1 file)
  • 20260118120000_add_otel_log_fields.exs: Create migration to add new log table columns (+27/-0)

Tests (3 files)
  • show_test.exs: Add test for nested attributes augmentation (+47/-0)
  • test_helper.exs: Update test database schema with new columns (+4/-0)
  • schema.sql: Update test fixture schema with new columns (+4/-0)

Formatting (2 files)
  • devices.rs: Minor formatting improvements in query builder (+9/-10)
  • mod.rs: Minor formatting improvements in test code (+13/-4)

Documentation (1 file)
  • tasks.md: Mark SRQL paths update task as complete (+1/-1)

qodo-code-review[bot] commented 2026-01-18 20:27:50 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2358#issuecomment-3765711767
Original created: 2026-01-18T20:27:50Z


PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢
No security concerns identified
No security vulnerabilities detected by AI analysis. Human verification is advised for critical code.
Ticket Compliance
⚪ 🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
⚪ Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Trace flag truncation: parseTraceFlags converts a potentially non-integer/large numeric value to int32 without
validating bounds or integer-ness, which can silently truncate/alter the trace flags
value.

Referred Code
func parseTraceFlags(entry map[string]interface{}) *int32 {
	if value, ok := firstValue(entry, "trace_flags", "traceFlags", "flags"); ok {
		if numeric, ok := parseNumeric(value); ok {
			traceFlags := int32(numeric)
			return &traceFlags
		}
	}

	return nil
}

Learn more about managing compliance generic rules or creating your own custom rules
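
A bounds-checked variant could address this finding. The sketch below uses a hypothetical helper name and is not from the PR; it treats trace flags as the single byte defined by the W3C trace-context spec, rejecting non-integral or out-of-range values instead of silently truncating them with an int32 conversion.

```go
package main

import (
	"fmt"
	"math"
)

// parseTraceFlagsChecked is a hypothetical bounds-checked variant of
// parseTraceFlags. W3C trace-context flags occupy a single byte, so any
// non-integral value or value outside 0..255 is rejected rather than
// truncated.
func parseTraceFlagsChecked(numeric float64) (*int32, bool) {
	if numeric != math.Trunc(numeric) || numeric < 0 || numeric > 255 {
		return nil, false
	}
	flags := int32(numeric)
	return &flags, true
}

func main() {
	if flags, ok := parseTraceFlagsChecked(1); ok {
		fmt.Println("accepted:", *flags)
	}
	if _, ok := parseTraceFlagsChecked(4e9); !ok {
		fmt.Println("rejected out-of-range value")
	}
	if _, ok := parseTraceFlagsChecked(1.5); !ok {
		fmt.Println("rejected non-integral value")
	}
}
```

Whether invalid values should be dropped (as here) or clamped is a policy choice; dropping keeps the stored flags trustworthy at the cost of losing malformed input.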

⚪
Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Audit scope unclear: The PR primarily extends log schema/ingestion but does not show whether critical
user/security actions are consistently audited with user ID, timestamp, action, and
outcome across the system.

Referred Code
attributes = FieldParser.encode_jsonb(json["attributes"]) || %{}
attributes = attach_ingest_metadata(attributes, metadata)
resource_attributes = normalize_resource_attributes(json)
{scope_name, scope_version} = parse_scope_fields(json)

%{
  id: log_id,
  timestamp: parse_timestamp(json),
  observed_timestamp: parse_observed_timestamp(json),
  trace_id: FieldParser.get_field(json, "trace_id", "traceId"),
  span_id: FieldParser.get_field(json, "span_id", "spanId"),
  trace_flags: parse_trace_flags(json),
  severity_text: FieldParser.get_field(json, "severity_text", "severityText") || json["level"],
  severity_number:
    json
    |> FieldParser.get_field("severity_number", "severityNumber")
    |> FieldParser.safe_bigint(),
  body: extract_body(json),
  event_name: FieldParser.get_field(json, "event_name", "eventName"),
  service_name: service_field(json, resource_attributes, "service_name", "serviceName", "service.name"),
  service_version:


 ... (clipped 17 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Sensitive data exposure: The UI now unwraps and renders nested attributes/resource_attributes (and derives
scope/service fields), which may surface secrets/PII if those are present in incoming log
attributes.

Referred Code
defp augment_log(%{} = log) do
  attributes = parse_attributes(Map.get(log, "attributes")) || %{}
  resource_attributes =
    parse_attributes(Map.get(log, "resource_attributes")) ||
      extract_resource_from_attributes(attributes)

  {scope_name, scope_version} =
    extract_scope_from_attributes(
      Map.get(log, "scope_name"),
      Map.get(log, "scope_version"),
      attributes
    )

  attributes =
    attributes
    |> drop_attribute_keys(["resource", "resource_attributes", "resourceAttributes", "scope"])
    |> unwrap_nested_attributes()

  log
  |> Map.put("attributes", if(map_size(attributes) == 0, do: nil, else: attributes))
  |> Map.put("resource_attributes", resource_attributes || Map.get(log, "resource_attributes"))


 ... (clipped 4 lines)

Learn more about managing compliance generic rules or creating your own custom rules
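
One possible mitigation, not part of this PR, is to redact well-known sensitive keys before attribute maps are persisted or rendered. A minimal Go sketch follows; the key list and function name are illustrative only.

```go
package main

import (
	"fmt"
	"strings"
)

// redactSensitiveAttributes returns a copy of attrs with the values of
// well-known sensitive keys replaced. The key list here is illustrative;
// a real deployment would make it configurable.
func redactSensitiveAttributes(attrs map[string]interface{}) map[string]interface{} {
	sensitive := map[string]bool{
		"password":      true,
		"authorization": true,
		"api_key":       true,
		"token":         true,
	}
	out := make(map[string]interface{}, len(attrs))
	for key, value := range attrs {
		if sensitive[strings.ToLower(key)] {
			out[key] = "[REDACTED]"
		} else {
			out[key] = value
		}
	}
	return out
}

func main() {
	attrs := map[string]interface{}{
		"service.name":  "auth",
		"Authorization": "Bearer abc123",
	}
	clean := redactSensitiveAttributes(attrs)
	fmt.Println(clean["service.name"], clean["Authorization"])
}
```

Redacting at ingestion (in json_logs.go or logs.ex) rather than in the UI would protect every downstream consumer, at the cost of losing the original values permanently.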

Compliance status legend
🟢 - Fully Compliant
🟡 - Partial Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label
qodo-code-review[bot] commented 2026-01-18 20:29:10 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2358#issuecomment-3765712902
Original created: 2026-01-18T20:29:10Z


PR Code Suggestions

Explore these optional code suggestions:

High-level
Move data normalization to ingestion

The suggestion is to move the data normalization logic, currently in the UI
layer (show.ex), to the data ingestion processors. This ensures data is
consistently structured upon storage, improving query performance and data
reliability for all consumers.

Examples:

web-ng/lib/serviceradar_web_ng_web/live/log_live/show.ex [138-162]
  defp augment_log(%{} = log) do
    attributes = parse_attributes(Map.get(log, "attributes")) || %{}
    resource_attributes =
      parse_attributes(Map.get(log, "resource_attributes")) ||
        extract_resource_from_attributes(attributes)

    {scope_name, scope_version} =
      extract_scope_from_attributes(
        Map.get(log, "scope_name"),
        Map.get(log, "scope_version"),

 ... (clipped 15 lines)
pkg/consumers/db-event-writer/json_logs.go [140-160]
	if len(resourceAttribs) == 0 {
		resourceValue, updated := popAttribute(attributesMap, "resource_attributes", "resourceAttributes", "resource")
		attributesMap = updated
		if parsed := parseAttributeValue(resourceValue); len(parsed) > 0 {
			resourceAttribs = parsed
		}
	}

	scopeName, scopeVersion = applyScopeFallbacks(entry, scopeName, scopeVersion)
	if scopeName == "" || scopeVersion == "" {

 ... (clipped 11 lines)

Solution Walkthrough:

Before:

# web-ng/.../show.ex (UI Layer)
def handle_params(..., socket) do
  log = fetch_log_from_db(log_id)
  augmented_log = augment_log(log) # Normalization happens here
  ...
end

defp augment_log(log) do
  # Extracts resource from attributes
  # Extracts scope from attributes
  # Cleans up attributes map
  # Populates service fields from resource
  ...
end

After:

# pkg/consumers/db-event-writer/json_logs.go (Ingestion Layer)
func buildJSONLogRow(entry) {
  // ... existing normalization ...
  // Add logic to unwrap nested attributes, e.g. attributes["attributes"]
  // Ensure all normalization from show.ex is present here
  ...
  return models.OTELLogRow{ ... } // Return fully normalized row
}

# web-ng/.../show.ex (UI Layer)
def handle_params(..., socket) do
  log = fetch_log_from_db(log_id) # Log is already normalized
  # `augment_log` function is removed
  ...
end

Suggestion importance[1-10]: 9


Why: This is a critical architectural suggestion; placing normalization logic in the UI is incorrect and leads to data inconsistency, while the PR already implements similar logic in some ingestion paths, indicating this was an oversight.

High
Possible issue
Correctly handle epoch timestamps

Remove the parsed.IsZero() check in parseObservedTimestamp to prevent
incorrectly discarding valid Unix epoch timestamps.

pkg/consumers/db-event-writer/json_logs.go [453-473]

 func parseObservedTimestamp(entry map[string]interface{}) *time.Time {
 	value, ok := firstValue(
 		entry,
 		"observed_timestamp",
 		"observedTimestamp",
 		"observed_time_unix_nano",
 		"observedTimeUnixNano",
 	)
 	if !ok {
 		return nil
 	}
 
 	if parsed, ok := parseFlexibleTime(value); ok {
-		if parsed.IsZero() {
-			return nil
-		}
 		return &parsed
 	}
 
 	return nil
 }
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a bug where a valid Unix epoch timestamp would be incorrectly discarded due to the IsZero() check, leading to data loss.

Medium
Ensure resource attributes are correctly propagated

Fix a bug in augment_log/1 by correctly assigning the extracted nested resource
attributes to the resource_attributes variable, ensuring service details are
properly populated.

web-ng/lib/serviceradar_web_ng_web/live/log_live/show.ex [138-162]

 defp augment_log(%{} = log) do
   attributes = parse_attributes(Map.get(log, "attributes")) || %{}
+  
   resource_attributes =
     parse_attributes(Map.get(log, "resource_attributes")) ||
       extract_resource_from_attributes(attributes)
 
   {scope_name, scope_version} =
     extract_scope_from_attributes(
       Map.get(log, "scope_name"),
       Map.get(log, "scope_version"),
       attributes
     )
 
   attributes =
     attributes
     |> drop_attribute_keys(["resource", "resource_attributes", "resourceAttributes", "scope"])
     |> unwrap_nested_attributes()
 
   log
   |> Map.put("attributes", if(map_size(attributes) == 0, do: nil, else: attributes))
-  |> Map.put("resource_attributes", resource_attributes || Map.get(log, "resource_attributes"))
+  |> Map.put("resource_attributes", resource_attributes)
   |> Map.put("scope_name", scope_name)
   |> Map.put("scope_version", scope_version)
   |> put_service_from_resource(resource_attributes)
 end
Suggestion importance[1-10]: 8


Why: This suggestion correctly identifies a bug where nested resource_attributes are not propagated correctly, causing put_service_from_resource/2 to fail. The fix is accurate and necessary.

Medium
Fix incorrect scope attributes parsing logic

Refactor parse_scope_attributes to correctly handle the fallback to
json["scope"] after attempting to encode scope_attributes, preventing incorrect
parsing when scope is a string.

elixir/serviceradar_core/lib/serviceradar/event_writer/processors/logs.ex [135-147]

 defp parse_scope_attributes(json) when is_map(json) do
   json
   |> FieldParser.get_field("scope_attributes", "scopeAttributes")
-  |> case do
-    nil -> json["scope"]
-    value -> value
-  end
   |> FieldParser.encode_jsonb()
   |> case do
-    nil -> %{}
-    value -> value
+    nil ->
+      json
+      |> Map.get("scope")
+      |> FieldParser.encode_jsonb()
+      |> case do
+        nil -> %{}
+        value -> value
+      end
+
+    value ->
+      value
   end
 end
Suggestion importance[1-10]: 7

Why: The suggestion correctly identifies a logic flaw where falling back to json["scope"] might result in an empty map if scope is a string. The proposed fix correctly handles this edge case.

Impact: Medium
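The corrected fallback order can be sketched in Go for brevity (the helper itself is hypothetical; only the field names come from the PR): prefer scope_attributes / scopeAttributes, fall back to a scope map, and treat a non-map scope value, such as a string, as absent rather than as an encodable payload.

```go
package main

import "fmt"

// scopeAttributes resolves a log entry's scope attributes with the fallback
// order the fix establishes. A non-map value under "scope" (e.g. a plain
// string) is skipped instead of being force-encoded into an empty map.
func scopeAttributes(entry map[string]any) map[string]any {
	for _, key := range []string{"scope_attributes", "scopeAttributes", "scope"} {
		if m, ok := entry[key].(map[string]any); ok && len(m) > 0 {
			return m
		}
	}
	return map[string]any{} // nothing usable: empty attributes, never nil
}

func main() {
	entry := map[string]any{
		"scope": map[string]any{"name": "otel.scope", "version": "1.2.0"},
	}
	fmt.Println(scopeAttributes(entry)["name"]) // otel.scope
}
```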
Preserve epoch for zero timestamp

In safeTimeFromUnixNano, return the Unix epoch for a zero timestamp instead of
time.Now() to maintain consistent semantics for unset time fields.

pkg/consumers/db-event-writer/processor.go [166-173]

 func safeTimeFromUnixNano(value uint64) time.Time {
     if value == 0 {
-        return time.Now().UTC()
+        return time.Unix(0, 0).UTC()
     }
     if value <= uint64(maxInt64) {
         return time.Unix(0, int64(value))
     }
     // overflow handling...
 }

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7

Why: The suggestion correctly points out that using time.Now() for a zero timestamp is incorrect. Returning the Unix epoch is a more sensible default and aligns with how ObservedTimestamp is handled.

Impact: Medium
General
Unify observed timestamp key

In otel_logs.rs, rename the observed_time_unix_nano JSON field to
observed_timestamp to ensure consistency across services.

cmd/consumers/zen/src/otel_logs.rs [75-79]

 let mut log_json = json!({
     "timestamp": timestamp,
-    "observed_time_unix_nano": log_record.observed_time_unix_nano,
+    "observed_timestamp": log_record.observed_time_unix_nano,
     // ...
 });

[To ensure code accuracy, apply this suggestion manually]

Suggestion importance[1-10]: 7

Why: This suggestion correctly proposes standardizing the observed_timestamp field name for consistency with other services in the PR, which improves maintainability and reduces parsing complexity downstream.

Impact: Medium
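Until every producer is renamed, consumers can stay tolerant of both spellings. A sketch of that tolerance (the key order follows the PR's parseObservedTimestamp; the uint64-only value handling is a simplification of its flexible parsing):

```go
package main

import (
	"fmt"
	"time"
)

// observedTimestamp returns the first observed-timestamp value found under
// either the new or the legacy key spelling, so renamed producers and older
// ones can coexist during the migration.
func observedTimestamp(entry map[string]any) (time.Time, bool) {
	for _, key := range []string{
		"observed_timestamp", "observedTimestamp",
		"observed_time_unix_nano", "observedTimeUnixNano",
	} {
		if nanos, ok := entry[key].(uint64); ok {
			return time.Unix(0, int64(nanos)).UTC(), true
		}
	}
	return time.Time{}, false
}

func main() {
	entry := map[string]any{"observed_time_unix_nano": uint64(1_700_000_000_000_000_000)}
	ts, ok := observedTimestamp(entry)
	fmt.Println(ok, ts.Year()) // true 2023
}
```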
Imported GitHub PR comment. Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/2358#issuecomment-3765712902
Original created: 2026-01-18T20:29:10Z

PR Code Suggestions ✨

High-level
Move data normalization to ingestion

The suggestion is to move the data normalization logic, currently in the UI layer (show.ex), to the data ingestion processors. This ensures data is consistently structured upon storage, improving query performance and data reliability for all consumers.

Examples:

web-ng/lib/serviceradar_web_ng_web/live/log_live/show.ex [138-162]
pkg/consumers/db-event-writer/json_logs.go [140-160]

Solution walkthrough — before (web-ng/.../show.ex, UI layer):

 def handle_params(..., socket) do
   log = fetch_log_from_db(log_id)
   augmented_log = augment_log(log)  # Normalization happens here
   ...
 end

 defp augment_log(log) do
   # Extracts resource from attributes
   # Extracts scope from attributes
   # Cleans up attributes map
   # Populates service fields from resource
   ...
 end

After (pkg/consumers/db-event-writer/json_logs.go, ingestion layer):

 func buildJSONLogRow(entry) {
     // ... existing normalization ...
     // Add logic to unwrap nested attributes, e.g. attributes["attributes"]
     // Ensure all normalization from show.ex is present here
     ...
     return models.OTELLogRow{ ... } // Return fully normalized row
 }

 # web-ng/.../show.ex (UI layer)
 def handle_params(..., socket) do
   log = fetch_log_from_db(log_id)  # Log is already normalized
   # `augment_log` function is removed
   ...
 end

Suggestion importance[1-10]: 9

Why: This is a critical architectural suggestion; placing normalization logic in the UI is incorrect and leads to data inconsistency, while the PR already implements similar logic in some ingestion paths, indicating this was an oversight.

Impact: High

Possible issue
Correctly handle epoch timestamps

Remove the parsed.IsZero() check in parseObservedTimestamp to prevent incorrectly discarding valid Unix epoch timestamps.

pkg/consumers/db-event-writer/json_logs.go [453-473]

 func parseObservedTimestamp(entry map[string]interface{}) *time.Time {
     value, ok := firstValue(
         entry,
         "observed_timestamp", "observedTimestamp",
         "observed_time_unix_nano", "observedTimeUnixNano",
     )
     if !ok {
         return nil
     }
     if parsed, ok := parseFlexibleTime(value); ok {
-        if parsed.IsZero() {
-            return nil
-        }
         return &parsed
     }
     return nil
 }

Suggestion importance[1-10]: 8

Why: The suggestion correctly identifies a bug where a valid Unix epoch timestamp would be incorrectly discarded due to the IsZero() check, leading to data loss.

Impact: Medium