1942 build cnpg with age and timescale extensions #2413

Merged
mfreeman451 merged 2 commits from refs/pull/2413/head into main 2025-11-15 03:08:50 +00:00
mfreeman451 commented 2025-11-15 03:04:40 +00:00 (Migrated from github.com)
Owner

Imported from GitHub pull request.

Original GitHub pull request: #1943
Original author: @mfreeman451
Original URL: https://github.com/carverauto/serviceradar/pull/1943
Original created: 2025-11-15T03:04:40Z
Original updated: 2025-11-15T03:09:16Z
Original head: carverauto/serviceradar:1942-build-cnpg-with-age-and-timescale-extensions
Original base: main
Original merged: 2025-11-15T03:08:50Z by @mfreeman451

User description

IMPORTANT: Please sign the Developer Certificate of Origin

Thank you for your contribution to ServiceRadar. Please note, when contributing, the developer must include
a DCO sign-off statement indicating the DCO acceptance in one commit message. Here
is an example DCO Signed-off-by line in a commit message:

Signed-off-by: J. Doe <j.doe@domain.com>

Describe your changes

Code checklist before requesting a review

  • I have signed the DCO?
  • The build completes without errors?
  • All tests are passing when running make test?

PR Type

Enhancement


Description

  • Build custom CNPG image with TimescaleDB and Apache AGE extensions

  • Add Python helpers for OCI rootfs extraction with whiteout handling

  • Update SPIRE deployment manifests to use custom image with extensions

  • Document clean rebuild procedure for CNPG cluster with new image


Diagram Walkthrough

```mermaid
flowchart LR
  A["CloudNativePG<br/>PostgreSQL 16.6"] -->|extract rootfs| B["CNPG Rootfs<br/>Tarball"]
  B -->|overlay debs| C["Add Dev<br/>Headers"]
  C -->|compile| D["TimescaleDB<br/>Extension"]
  C -->|compile| E["Apache AGE<br/>Extension"]
  D -->|layer| F["Custom CNPG<br/>Image"]
  E -->|layer| F
  F -->|deploy| G["SPIRE CNPG<br/>Cluster"]
  G -->|init SQL| H["Extensions<br/>Enabled"]
```

File Walkthrough

Relevant files
Enhancement
11 files
pg_config_wrapper.sh
Wrapper script for PostgreSQL config path rewriting           
+9/-0     
pg_config_rewrite.py
Rewrite pg_config paths for custom root directory               
+24/-0   
extract_rootfs.py
Extract container rootfs with OCI whiteout handling           
+163/-0 
export_rootfs_from_layout.py
Export OCI image layout to rootfs tarball                               
+111/-0 
overlay_deb_packages.py
Overlay Debian packages into extracted rootfs                       
+185/-0 
repo_alias.bzl
Bazel repository alias rule for Bzlmod compatibility         
+15/-0   
push_targets.bzl
Refactor push targets to dict format with CNPG image         
+31/-27 
BUILD.bazel
Add CNPG image build with TimescaleDB and AGE layers         
+176/-0 
spire-postgres.yaml
Configure CNPG cluster with custom image and extensions   
+44/-1   
spire-server.yaml
Update SPIRE server to use renamed cnpg cluster                   
+1/-1     
cnpg-cluster.yaml
Configure demo CNPG cluster with custom image and extensions
+12/-1   
Configuration changes
4 files
BUILD.bazel
Build package placeholder for repository alias                     
+2/-0     
values.yaml
Add CNPG image configuration and extension parameters       
+11/-1   
kustomization.yaml
Rename CNPG cluster manifest reference                                     
+1/-1     
server-configmap.yaml
Update SPIRE server database connection to cnpg cluster   
+1/-1     
Dependencies
3 files
MODULE.bazel
Add CNPG PostgreSQL 16.6 and extension source dependencies
+104/-6 
BUILD.bazel
Add crane binary filegroup for OCI export                               
+6/-0     
timescaledb
Add TimescaleDB source repository submodule                           
+1/-0     
Documentation
8 files
README.md
Document CNPG rebuild with TimescaleDB and AGE                     
+66/-2   
agents.md
Add SPIRE CNPG cluster rebuild runbook                                     
+49/-0   
onboarding-review-2025.md
Update CNPG cluster name reference in documentation           
+1/-1     
spiffe-identity.md
Update CNPG cluster name and description                                 
+1/-1     
project.md
Expand project context with comprehensive architecture details
+48/-10 
proposal.md
OpenSpec proposal for CNPG with extensions                             
+16/-0   
spec.md
OpenSpec requirements for CNPG image and deployment           
+40/-0   
tasks.md
OpenSpec task checklist for CNPG implementation                   
+16/-0   

qodo-code-review[bot] commented 2025-11-15 03:05:27 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1943#issuecomment-3535480600
Original created: 2025-11-15T03:05:27Z

PR Compliance Guide 🔍

(Compliance updated until commit github.com/carverauto/serviceradar@c848a4fda2)

Below is a summary of compliance checks for this PR:

Security Compliance
Archive extraction traversal

Description: The script creates files, symlinks, and hardlinks from tar members into a destination
directory without enforcing a chroot/jail, which could allow path confusion within the
work directory if a malicious tar is processed (e.g., symlink traversal combined with
later file writes).
extract_rootfs.py [121-154]

Referred Code

with tarfile.open(tarball, "r:*") as archive:
    pending_links = []
    dir_perms = []
    for member in archive:
        if member.isfile() or member.isdir() or member.issym() or member.islnk():
            relpath = _normalize_member_name(member.name)
            if relpath is None:
                continue
            if _apply_whiteout(dest, relpath):
                continue
            if member.islnk():
                link_target = _normalize_member_name(member.linkname)
                if link_target is None:
                    continue
                pending_links.append((link_target, relpath))
                continue
            _extract_member(archive, member, dest, relpath, dir_perms)

    for source_rel, dest_rel in pending_links:
        source_path = os.path.join(dest, source_rel)


 ... (clipped 13 lines)
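A common mitigation for this class of issue is to resolve every member path (and link target) against the extraction root and refuse anything that escapes it. The sketch below is illustrative, not code from the PR; on Python 3.12+ the built-in `extractall(..., filter="data")` enforces similar checks.

```python
import os
import tarfile


def safe_extract(tarball: str, dest: str) -> None:
    """Extract a tarball, rejecting members or link targets that would
    resolve outside the destination directory."""
    dest_root = os.path.realpath(dest)
    os.makedirs(dest_root, exist_ok=True)
    with tarfile.open(tarball, "r:*") as archive:
        for member in archive:
            target = os.path.realpath(os.path.join(dest_root, member.name))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"blocked traversal attempt: {member.name}")
            if member.issym() or member.islnk():
                # Symlink targets are relative to the link's directory;
                # hard-link targets are relative to the archive root.
                base = os.path.dirname(target) if member.issym() else dest_root
                link = os.path.realpath(os.path.join(base, member.linkname))
                if os.path.commonpath([dest_root, link]) != dest_root:
                    raise ValueError(f"blocked link escape: {member.name}")
            archive.extract(member, dest_root)
```

This keeps the whiteout handling in `extract_rootfs.py` intact while adding a hard boundary check before any filesystem write.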
Symlink write redirection

Description: Extracts tar payloads from .deb archives into a rootfs directory and honors
symlinks/hardlinks without sandboxing; while it normalizes paths to prevent ../ escapes,
writing through attacker-controlled symlinks inside the destination could still redirect
writes within the rootfs tree.
overlay_deb_packages.py [88-120]

Referred Code
def _extract_tar_stream(stream: BinaryIO, dest: str) -> None:
    pending_links: List[Tuple[str, str]] = []
    dir_perms: List[Tuple[str, int]] = []
    with tarfile.open(fileobj=stream, mode="r:*") as archive:
        for member in archive:
            if not (member.isfile() or member.isdir() or member.issym() or member.islnk()):
                continue
            relpath = _normalize_member_name(member.name)
            if relpath is None:
                continue
            if member.islnk():
                link_target = _normalize_member_name(member.linkname)
                if link_target is None:
                    continue
                pending_links.append((link_target, relpath))
                continue
            _extract_member(archive, member, dest, relpath, dir_perms)

    _apply_hardlinks(dest, pending_links)
    _apply_dir_perms(dir_perms)



 ... (clipped 12 lines)
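One way to guard against write redirection through previously extracted symlinks is to walk each path component before writing and reject any ancestor that is a link. This helper is a hedged sketch (the function name is illustrative, not from the PR):

```python
import os


def resolve_safely(dest_root: str, relpath: str) -> str:
    """Resolve relpath under dest_root, refusing to write through symlinks.

    Checks each ancestor component so a symlink extracted from an earlier
    archive member cannot redirect a later file write elsewhere in the tree.
    """
    current = dest_root
    parts = [p for p in relpath.split("/") if p not in ("", ".")]
    for part in parts[:-1]:
        current = os.path.join(current, part)
        if os.path.islink(current):
            raise ValueError(f"refusing to write through symlink: {current}")
    return os.path.join(current, parts[-1]) if parts else dest_root
```

`_extract_member` could call this in place of a plain `os.path.join` before opening the output file.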
Ticket Compliance
🟡 🎫 #1942
  • 🟢 Build and publish a custom CloudNativePG (CNPG) PostgreSQL 16.6 image that includes TimescaleDB and Apache AGE extensions.
  • Update SPIRE CNPG deployment (demo kustomize and Helm) to use the custom image and enable both extensions via shared_preload_libraries and init SQL.
  • Provide build tooling to extract/overlay CNPG rootfs and compile extensions reproducibly (including whiteout handling and local crane fallback).
  • Document a clean rebuild procedure for the SPIRE CNPG cluster using the new image and verifying extension availability.
  • Integrate the new CNPG image into the repository’s image push/publish flow with appropriate tags.
  • Verify at runtime that the built CNPG image successfully loads TimescaleDB and AGE (CREATE EXTENSION succeeds) on a real cluster.
  • Confirm Helm-rendered manifests deploy correctly across environments and that image pulls succeed with provided imagePullSecrets.
  • Validate performance/stability impact of shared_preload_libraries settings under production-like load.
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Weak error handling: Several code paths raise generic ValueError or continue silently (e.g., missing hardlink
sources) without contextual logging or handling for edge cases, reducing diagnosability
when overlays fail.

Referred Code
def _apply_hardlinks(dest: str, pending_links: List[Tuple[str, str]]) -> None:
    for source_rel, dest_rel in pending_links:
        source_path = os.path.join(dest, source_rel)
        target_path = os.path.join(dest, dest_rel)
        if not os.path.exists(source_path):
            continue
        _ensure_parent(target_path)
        if os.path.lexists(target_path):
            os.unlink(target_path)
        os.link(source_path, target_path)


def _apply_dir_perms(dir_perms: List[Tuple[str, int]]) -> None:
    for path, mode in dir_perms:
        if os.path.exists(path):
            os.chmod(path, mode)


def _extract_tar_stream(stream: BinaryIO, dest: str) -> None:
    pending_links: List[Tuple[str, str]] = []
    dir_perms: List[Tuple[str, int]] = []


 ... (clipped 30 lines)
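Addressing the silent-continue concern could be as simple as recording each skipped hardlink with enough context to diagnose a failed overlay. A sketch of the reworked helper (logger name and warning format are illustrative):

```python
import logging
import os
from typing import List, Tuple

log = logging.getLogger("overlay_deb_packages")


def apply_hardlinks_logged(dest: str, pending_links: List[Tuple[str, str]]) -> None:
    """Like _apply_hardlinks, but reports skipped links instead of
    silently continuing past missing sources."""
    skipped = 0
    for source_rel, dest_rel in pending_links:
        source_path = os.path.join(dest, source_rel)
        target_path = os.path.join(dest, dest_rel)
        if not os.path.exists(source_path):
            skipped += 1
            log.warning("hardlink source missing: %s -> %s", dest_rel, source_rel)
            continue
        os.makedirs(os.path.dirname(target_path), exist_ok=True)
        if os.path.lexists(target_path):
            os.unlink(target_path)
        os.link(source_path, target_path)
    if skipped:
        log.warning("%d hardlink(s) skipped during overlay", skipped)
```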

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Missing audit logs: New helper scripts perform filesystem extraction and OCI layer processing without emitting
structured logs of critical actions or outcomes, making it unclear who did what and when
during rootfs/export operations.

Referred Code
def main() -> None:
    parser = argparse.ArgumentParser(description="Create a rootfs tarball from an OCI layout.")
    parser.add_argument("--layout", required=True, help="Path to the OCI layout directory.")
    parser.add_argument("--output", required=True, help="Path to the output tarball.")
    args = parser.parse_args()

    layout_dir = os.path.abspath(args.layout)
    output_path = os.path.abspath(args.output)

    work_dir = tempfile.mkdtemp(prefix="cnpg_rootfs_")
    try:
        for layer in _layout_layers(layout_dir):
            _extract_layer(layer, work_dir)
        os.makedirs(os.path.dirname(output_path), exist_ok=True)
        _write_tarball(work_dir, output_path)
    finally:
        shutil.rmtree(work_dir, ignore_errors=True)


if __name__ == "__main__":
    main()

Learn more about managing compliance generic rules or creating your own custom rules
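If structured audit output is wanted, a small JSON-lines helper covers the "who/what/when" gap without changing the scripts' control flow. Everything here (field names, the stderr sink, the `USER` fallback) is an illustrative assumption, not part of the PR:

```python
import json
import os
import sys
import time


def audit(action: str, **details) -> None:
    """Emit one JSON line per critical build action (e.g. layer extraction,
    package overlay) so CI logs record what happened and when."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": os.environ.get("USER", "unknown"),
        "action": action,
        **details,
    }
    print(json.dumps(record, sort_keys=True), file=sys.stderr)
```

`main()` could then call `audit("extract_layer", layer=..., dest=...)` around each `_extract_layer` invocation.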

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Verbose traceback: On failure the script prints full stack traces and exception messages to stderr which may
expose internal paths or details if surfaced to users of automated build systems.

Referred Code

if __name__ == "__main__":
    try:
        main()
    except Exception as exc:  # pragma: no cover - genrule helper
        traceback.print_exc()
        print(f"extract_rootfs: {exc}", file=sys.stderr)
        sys.exit(1)

Learn more about managing compliance generic rules or creating your own custom rules
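A minimal way to act on this finding is to gate the full traceback behind an opt-in environment variable while keeping the one-line generic message. The variable name `DEBUG_TRACE` is illustrative:

```python
import os
import sys
import traceback


def run(main_fn) -> int:
    """Run a genrule helper's main(), printing the full traceback only
    when DEBUG_TRACE=1 is set; otherwise emit a single generic line."""
    try:
        main_fn()
        return 0
    except Exception as exc:
        if os.environ.get("DEBUG_TRACE") == "1":
            traceback.print_exc()
        print(f"extract_rootfs: {exc}", file=sys.stderr)
        return 1
```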

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Path sanitization: While normalization exists, the extraction processes write files and symlinks from
external archives without additional validation or sandboxing beyond simple path checks,
which may warrant further review for traversal and link handling safety.

Referred Code
def _extract_tar_stream(stream: BinaryIO, dest: str) -> None:
    pending_links: List[Tuple[str, str]] = []
    dir_perms: List[Tuple[str, int]] = []
    with tarfile.open(fileobj=stream, mode="r:*") as archive:
        for member in archive:
            if not (member.isfile() or member.isdir() or member.issym() or member.islnk()):
                continue
            relpath = _normalize_member_name(member.name)
            if relpath is None:
                continue
            if member.islnk():
                link_target = _normalize_member_name(member.linkname)
                if link_target is None:
                    continue
                pending_links.append((link_target, relpath))
                continue
            _extract_member(archive, member, dest, relpath, dir_perms)

    _apply_hardlinks(dest, pending_links)
    _apply_dir_perms(dir_perms)



 ... (clipped 9 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Compliance status legend:
🟢 - Fully compliant
🟡 - Partially compliant
🔴 - Not compliant
- Requires further human verification
🏷️ - Compliance label

Previous compliance checks

Compliance check up to commit c848a4f
Security Compliance
Path traversal/whiteout delete

Description: Whiteout handling deletes arbitrary filesystem paths during extraction which, if pointed
at an unintended directory (via crafted tar entries), could remove files within the
extraction root; ensure all paths are strictly confined to the destination and inputs are
trusted.
extract_rootfs.py [68-86]

Referred Code
def _apply_whiteout(dest: str, relpath: str) -> bool:
    basename = posixpath.basename(relpath)
    parent = posixpath.dirname(relpath)

    if basename == ".wh..wh..opq":
        target_dir = os.path.join(dest, parent)
        if os.path.isdir(target_dir):
            for entry in os.listdir(target_dir):
                _remove_path(os.path.join(target_dir, entry))
        return True

    if basename.startswith(".wh."):
        target_rel = posixpath.join(parent, basename[4:])
        target_path = os.path.join(dest, target_rel)
        if os.path.exists(target_path) or os.path.islink(target_path):
            _remove_path(target_path)
        return True

    return False
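The quoted `_apply_whiteout` could be hardened along the lines the finding suggests by re-checking that every deletion target resolves inside the extraction root. This variant is a hedged sketch, not the PR's code:

```python
import os
import posixpath
import shutil


def _remove(path: str) -> None:
    if os.path.isdir(path) and not os.path.islink(path):
        shutil.rmtree(path)
    elif os.path.lexists(path):
        os.unlink(path)


def apply_whiteout_confined(dest: str, relpath: str) -> bool:
    """Handle OCI whiteout entries like _apply_whiteout, but verify each
    deletion target stays under the extraction root before removing it."""
    dest_root = os.path.realpath(dest)

    def confined(path: str) -> str:
        resolved = os.path.realpath(path)
        if os.path.commonpath([dest_root, resolved]) != dest_root:
            raise ValueError(f"whiteout escapes extraction root: {path}")
        return resolved

    basename = posixpath.basename(relpath)
    parent = posixpath.dirname(relpath)

    if basename == ".wh..wh..opq":
        # Opaque whiteout: hide everything lower layers put in this dir.
        target_dir = confined(os.path.join(dest_root, parent))
        if os.path.isdir(target_dir):
            for entry in os.listdir(target_dir):
                _remove(os.path.join(target_dir, entry))
        return True

    if basename.startswith(".wh."):
        # Regular whiteout: ".wh.foo" deletes "foo" from the merged view.
        target = confined(
            os.path.join(dest_root, posixpath.join(parent, basename[4:])))
        _remove(target)
        return True

    return False
```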
Untrusted archive extraction

Description: The .deb ar parsing trusts entry headers and extracts data.tar.* contents, which if
untrusted could write arbitrary files within the rootfs; ensure packages are verified
(e.g., checksum/signature) and paths remain confined to the destination.
overlay_deb_packages.py [120-149]

Referred Code


def _apply_deb_package(dest: str, package_path: str) -> None:
    with open(package_path, "rb") as deb_file:
        header = deb_file.read(8)
        if header != b"!<arch>\n":
            raise ValueError(f"{package_path} is not an ar archive")
        while True:
            entry_header = deb_file.read(60)
            if not entry_header:
                break
            if len(entry_header) != 60:
                raise ValueError(f"Corrupt ar header in {package_path}")
            name = entry_header[:16].decode("utf-8").strip()
            size_str = entry_header[48:58].decode("utf-8").strip()
            try:
                size = int(size_str)
            except ValueError as exc:
                raise ValueError(f"Invalid size in {package_path}") from exc
            data = deb_file.read(size)
            if size % 2 == 1:


 ... (clipped 9 lines)
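The verification the finding asks for can be done with a pinned sha256 check before the ar parsing runs. Digest pinning would naturally live next to the package declarations in MODULE.bazel; this helper itself is an illustrative sketch, not code from the PR:

```python
import hashlib


def verify_deb_digest(package_path: str, expected_sha256: str) -> None:
    """Check a downloaded .deb against a pinned digest before overlaying
    its contents into the rootfs."""
    digest = hashlib.sha256()
    with open(package_path, "rb") as fh:
        # Hash in 1 MiB chunks to avoid loading the whole package.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    actual = digest.hexdigest()
    if actual != expected_sha256:
        raise ValueError(
            f"{package_path}: sha256 mismatch "
            f"(expected {expected_sha256}, got {actual})")
```

`_apply_deb_package` would call this first and refuse to extract any payload whose digest does not match.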
Ticket Compliance
🟡 🎫 #1942
  • 🟢 Build a custom CloudNativePG (CNPG) PostgreSQL 16.6 image that includes TimescaleDB and Apache AGE extensions.
  • Integrate the custom CNPG image into the SPIRE deployment (demo kustomize and Helm), enabling the extensions via shared_preload_libraries and init SQL.
  • Provide build tooling to extract OCI rootfs and overlay build dependencies, handling whiteouts correctly and working without remote crane on executors.
  • Ensure CI/publish flow can push the custom CNPG image to GHCR with a stable tag.
  • Document an operator runbook for cleanly rebuilding the SPIRE CNPG cluster using the new image and verifying extensions.
  • Validate at runtime that timescaledb and age can be created and loaded in a running pod using the produced image.
  • Confirm CI successfully publishes the image to GHCR with the specified tag and digest and that pull credentials work in target clusters.
  • Helm and kustomize rendered manifests should be applied to a cluster to ensure the CNPG operator accepts the parameters and pods become Ready.
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
No audit logs: The new scripts and Helm changes perform critical build/deploy actions (rootfs extraction,
package overlay, image selection) without emitting any structured audit logs identifying
user, action, and outcome.

Referred Code
    parser = argparse.ArgumentParser(description="Create a rootfs tarball from an OCI layout.")
    parser.add_argument("--layout", required=True, help="Path to the OCI layout directory.")
    parser.add_argument("--output", required=True, help="Path to the output tarball.")
    args = parser.parse_args()

    layout_dir = os.path.abspath(args.layout)
    output_path = os.path.abspath(args.output)

    work_dir = tempfile.mkdtemp(prefix="cnpg_rootfs_")
    try:
        for layer in _layout_layers(layout_dir):
            _extract_layer(layer, work_dir)
        os.makedirs(os.path.dirname(output_path), exist_ok=True)
        _write_tarball(work_dir, output_path)
    finally:
        shutil.rmtree(work_dir, ignore_errors=True)


if __name__ == "__main__":
    main()

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Limited edge handling: Several helpers raise generic ValueError without contextual logging and do not validate
all edge cases (e.g., ar header parsing, missing data.tar stream) which may hinder
actionable diagnostics in CI.

Referred Code

def _apply_deb_package(dest: str, package_path: str) -> None:
    with open(package_path, "rb") as deb_file:
        header = deb_file.read(8)
        if header != b"!<arch>\n":
            raise ValueError(f"{package_path} is not an ar archive")
        while True:
            entry_header = deb_file.read(60)
            if not entry_header:
                break
            if len(entry_header) != 60:
                raise ValueError(f"Corrupt ar header in {package_path}")
            name = entry_header[:16].decode("utf-8").strip()
            size_str = entry_header[48:58].decode("utf-8").strip()
            try:
                size = int(size_str)
            except ValueError as exc:
                raise ValueError(f"Invalid size in {package_path}") from exc
            data = deb_file.read(size)
            if size % 2 == 1:
                deb_file.seek(1, os.SEEK_CUR)


 ... (clipped 8 lines)
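A sketch of how the ar-header parsing loop could surface more actionable context (hypothetical refactor; `DebParseError` and `read_ar_entry_header` are illustrative names, not code from the PR):

```python
from typing import BinaryIO, Optional, Tuple


class DebParseError(ValueError):
    """Raised when a .deb (ar) archive is malformed, with byte-offset context."""


def read_ar_entry_header(deb_file: BinaryIO, package_path: str) -> Optional[Tuple[str, int]]:
    """Read one 60-byte ar entry header, reporting the offset on failure."""
    offset = deb_file.tell()
    entry_header = deb_file.read(60)
    if not entry_header:
        return None  # clean end of archive
    if len(entry_header) != 60:
        raise DebParseError(
            f"{package_path}: truncated ar header at offset {offset} "
            f"(got {len(entry_header)} of 60 bytes)"
        )
    name = entry_header[:16].decode("utf-8", errors="replace").strip()
    size_str = entry_header[48:58].decode("utf-8", errors="replace").strip()
    try:
        size = int(size_str)
    except ValueError as exc:
        raise DebParseError(
            f"{package_path}: invalid size field {size_str!r} for entry "
            f"{name!r} at offset {offset}"
        ) from exc
    return name, size
```

Including the offset and entry name in the message makes a corrupt package diagnosable from CI logs alone.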

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Verbose traceback: On failure, the script prints full tracebacks to stderr, which may expose internal paths;
consider gating detailed traces behind a debug flag while keeping user-facing messages
generic.

Referred Code
try:
    main()
except Exception as exc:  # pragma: no cover - genrule helper
    traceback.print_exc()
    print(f"extract_rootfs: {exc}", file=sys.stderr)
    sys.exit(1)
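One way to satisfy both goals is to keep the user-facing message generic and gate the full trace behind a flag; in this sketch the `EXTRACT_ROOTFS_DEBUG` environment variable and the `report_failure` helper are hypothetical:

```python
import os
import sys
import traceback
from typing import Optional, TextIO


def report_failure(exc: BaseException, tool: str = "extract_rootfs",
                   stream: Optional[TextIO] = None, debug: Optional[bool] = None) -> None:
    """Print a one-line generic error; dump the full traceback only in debug mode."""
    stream = stream or sys.stderr
    if debug is None:
        debug = bool(os.environ.get("EXTRACT_ROOTFS_DEBUG"))
    if debug:
        traceback.print_exception(type(exc), exc, exc.__traceback__, file=stream)
    print(f"{tool}: {type(exc).__name__}: {exc}", file=stream)


try:
    raise ValueError("corrupt layer")
except ValueError as exc:
    # Prints only "extract_rootfs: ValueError: corrupt layer" unless debugging.
    report_failure(exc)
```

CI can export the debug variable when a build fails, keeping detailed paths out of ordinary output.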

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Plaintext secret use: The template constructs a plaintext Postgres connection and relies on inline
username/password values, lacking enforcement of TLS or secret referencing best practices
for secure data handling.

Referred Code
stringData:
  username: {{ default "spire" $pg.username }}
  password: {{ default "changeme" $pg.password | quote }}
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: {{ default "cnpg" $pg.clusterName }}
  namespace: {{ $ns }}
spec:
  imageName: {{ $imageName }}
  instances: {{ default 3 $pg.instances }}
  {{- if $pg.imagePullSecrets }}
  imagePullSecrets:
    {{- range $secret := $pg.imagePullSecrets }}
    {{- if kindIs "string" $secret }}
    - name: {{ $secret }}
    {{- else }}
    - name: {{ $secret.name }}
    {{- end }}
    {{- end }}


 ... (clipped 26 lines)
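One mitigation, sketched below, is to stop templating credentials inline and point CNPG at a pre-created `kubernetes.io/basic-auth` Secret via `bootstrap.initdb.secret` (the secret name here is illustrative, not from the chart):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg
spec:
  bootstrap:
    initdb:
      database: spire
      owner: spire
      secret:
        # Created out of band, e.g. by an external secrets operator,
        # so no plaintext password is rendered into the Helm release.
        name: spire-db-credentials
```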

Learn more about managing compliance generic rules or creating your own custom rules

qodo-code-review[bot] commented 2025-11-15 03:06:39 +00:00 (Migrated from github.com)
Author
Owner

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1943#issuecomment-3535481770
Original created: 2025-11-15T03:06:39Z

PR Code Suggestions

Explore these optional code suggestions:

Category: High-level
Simplify the custom image build

Refactor the custom image build process by replacing the complex Bazel genrules
and shell scripts with a more maintainable multi-stage Dockerfile. This
encapsulates the build logic, improves readability, and utilizes Docker's
caching.

Examples:

docker/images/BUILD.bazel [1415-1541]
genrule(
    name = "timescaledb_extension_layer",
    srcs = [
        ":cnpg_postgresql_16_6_rootfs_tar",
        "@postgresql_server_dev_16_deb//file",
        "@debian_bison_amd64_deb//file",
        "@debian_flex_amd64_deb//file",
        "@debian_libpq_dev_amd64_deb//file",
        "//timescaledb:source_tree",
        "//docker/images:pg_config_wrapper.sh",

 ... (clipped 117 lines)

Solution Walkthrough:

Before:

# docker/images/BUILD.bazel
genrule(
    name = "timescaledb_extension_layer",
    srcs = [
        ":cnpg_postgresql_16_6_rootfs_tar",
        "@postgresql_server_dev_16_deb//file",
        # ... other debs and sources
    ],
    tools = [
        "//docker/images:extract_rootfs.py",
        "//docker/images:overlay_deb_packages.py",
    ],
    cmd = """
    # Extract base rootfs using custom python script
    python3 $(location //docker/images:extract_rootfs.py) ...
    # Overlay deb packages using another custom script
    python3 $(location //docker/images:overlay_deb_packages.py) ...
    # Set up chroot-like environment and build
    export CNPG_ROOT=...
    ./bootstrap ... && make install
    # Create tarball of installed files
    tar -cf $@ ...
    """,
)

After:

# docker/images/Dockerfile.cnpg-custom
ARG CNPG_BASE_IMAGE=ghcr.io/cloudnative-pg/postgresql:16.6

# Builder stage
FROM debian:bookworm as builder
RUN apt-get update && apt-get install -y build-essential ... postgresql-server-dev-16

# Build TimescaleDB
WORKDIR /build/timescaledb
RUN ... # download and build
RUN make install DESTDIR=/build/install

# Build AGE
WORKDIR /build/age
RUN ... # download and build
RUN make install DESTDIR=/build/install

# Final stage
FROM ${CNPG_BASE_IMAGE}
COPY --from=builder /build/install/usr/local/ /usr/local/

Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies that using complex genrules with multiple helper scripts to build the custom image is brittle and hard to maintain, and proposes a much more robust and standard multi-stage Dockerfile approach.

Impact: High

Category: Possible issue
Improve hard link creation reliability

Refactor the hard link creation logic to use a retry loop, ensuring links are
created even if their targets appear later in the tar archive.

docker/images/extract_rootfs.py [140-150]

-for source_rel, dest_rel in pending_links:
-    source_path = os.path.join(dest, source_rel)
-    target_path = os.path.join(dest, dest_rel)
-    if not os.path.exists(source_path):
-        continue
-    parent_dir = os.path.dirname(target_path)
-    if parent_dir:
-        os.makedirs(parent_dir, exist_ok=True)
-    if os.path.lexists(target_path):
-        os.unlink(target_path)
-    os.link(source_path, target_path)
+retries = 5
+while pending_links and retries > 0:
+    unresolved_links = []
+    for source_rel, dest_rel in pending_links:
+        source_path = os.path.join(dest, source_rel)
+        target_path = os.path.join(dest, dest_rel)
+        if not os.path.exists(source_path):
+            unresolved_links.append((source_rel, dest_rel))
+            continue
+        parent_dir = os.path.dirname(target_path)
+        if parent_dir:
+            os.makedirs(parent_dir, exist_ok=True)
+        if os.path.lexists(target_path):
+            os.unlink(target_path)
+        os.link(source_path, target_path)
+    if len(unresolved_links) == len(pending_links):
+        # No progress made, break to avoid infinite loop
+        break
+    pending_links = unresolved_links
+    retries -= 1
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that hard links may fail to be created if their targets appear later in the archive, and the proposed retry logic robustly fixes this potential extraction bug.

Impact: Medium
Fix broken symlink handling during cleanup

In _safe_rmtree, modify the _onerror handler to check for and unlink broken
symbolic links when a FileNotFoundError occurs to ensure proper cleanup.

docker/images/extract_rootfs.py [54-65]

 def _safe_rmtree(path: str) -> None:
     if not os.path.exists(path):
         return
 
     def _onerror(func, target, exc_info):
         _, error, _ = exc_info
         if isinstance(error, FileNotFoundError):
+            if os.path.islink(target):
+                os.unlink(target)
             return
         os.chmod(target, 0o700)
         func(target)
 
     shutil.rmtree(path, onerror=_onerror)
Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies a potential FileNotFoundError when _safe_rmtree encounters a broken symbolic link and provides a robust fix, improving the script's reliability.

Low
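A standalone sketch of the patched cleanup helper follows. The function name and demo paths are illustrative and the project's `_safe_rmtree` may differ in detail; the point is that the `_onerror` handler unlinks a dangling symlink instead of letting a `FileNotFoundError` leave it behind:

```python
import os
import shutil
import tempfile

def safe_rmtree(path):
    """Remove a tree, tolerating permission errors and broken symlinks.

    A sketch of _safe_rmtree with the suggested broken-symlink fix applied.
    """
    if not os.path.lexists(path):
        return

    def _onerror(func, target, exc_info):
        _, error, _ = exc_info
        if isinstance(error, FileNotFoundError):
            # A dangling symlink can raise FileNotFoundError from stat-based
            # checks even though the link entry itself still exists.
            if os.path.islink(target):
                os.unlink(target)
            return
        os.chmod(target, 0o700)
        func(target)

    shutil.rmtree(path, onerror=_onerror)

# Demo: a directory containing a dangling symlink is removed cleanly.
root = tempfile.mkdtemp()
os.symlink(os.path.join(root, "missing"), os.path.join(root, "dangling"))
safe_rmtree(root)
assert not os.path.exists(root)
```

Note that `shutil.rmtree`'s `onerror` parameter is deprecated in favor of `onexc` from Python 3.12 onward, so a long-lived version of this helper may want to support both.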
qodo-code-review[bot] commented 2025-11-15 03:06:44 +00:00 (Migrated from github.com)

Imported GitHub PR comment.

Original author: @qodo-code-review[bot]
Original URL: https://github.com/carverauto/serviceradar/pull/1943#issuecomment-3535482211
Original created: 2025-11-15T03:06:44Z

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: test-go

Failed stage: Run Go Tests ❌

Failed test name: TestNetworkSweeper_WatchConfigWithInitialSignal/WatchConfig_with_initial_KV_config

Failure summary:

The GitHub Action failed because a Go test in the pkg/sweeper package failed:
- Test TestNetworkSweeper_WatchConfigWithInitialSignal/WatchConfig_with_initial_KV_config timed out waiting for a config ready signal.
- File and line: sweeper_test.go:282 reported "Timeout waiting for config ready signal".
- Package result: FAIL github.com/carverauto/serviceradar/pkg/sweeper 0.642s, causing the overall process to exit with code 1.

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

234:  ok  	github.com/carverauto/serviceradar/pkg/logger	1.441s	coverage: 0.3% of statements in ./...
235:  ok  	github.com/carverauto/serviceradar/pkg/mapper	1.610s	coverage: 1.7% of statements in ./...
236:  ok  	github.com/carverauto/serviceradar/pkg/mcp	1.360s	coverage: 0.2% of statements in ./...
237:  ok  	github.com/carverauto/serviceradar/pkg/metrics	1.597s	coverage: 0.2% of statements in ./...
238:  ok  	github.com/carverauto/serviceradar/pkg/metricstore	1.279s	coverage: 0.2% of statements in ./...
239:  ok  	github.com/carverauto/serviceradar/pkg/models	1.437s	coverage: 0.4% of statements in ./...
240:  github.com/carverauto/serviceradar/pkg/monitoring		coverage: 0.0% of statements
241:  ok  	github.com/carverauto/serviceradar/pkg/natsutil	1.367s	coverage: 0.2% of statements in ./...
242:  ok  	github.com/carverauto/serviceradar/pkg/poller	1.572s	coverage: 1.0% of statements in ./...
243:  ok  	github.com/carverauto/serviceradar/pkg/registry	2.175s	coverage: 4.7% of statements in ./...
244:  ok  	github.com/carverauto/serviceradar/pkg/scan	1.499s	coverage: 0.2% of statements in ./...
245:  github.com/carverauto/serviceradar/pkg/search		coverage: 0.0% of statements
246:  github.com/carverauto/serviceradar/pkg/spireadmin		coverage: 0.0% of statements
247:  github.com/carverauto/serviceradar/pkg/swagger		coverage: 0.0% of statements
248:  -test.shuffle 1763175990926504549
249:  --- FAIL: TestNetworkSweeper_WatchConfigWithInitialSignal (0.08s)
250:  --- FAIL: TestNetworkSweeper_WatchConfigWithInitialSignal/WatchConfig_with_initial_KV_config (0.08s)
251:  sweeper_test.go:282: Timeout waiting for config ready signal
252:  FAIL
253:  coverage: 2.6% of statements in ./...
254:  FAIL	github.com/carverauto/serviceradar/pkg/sweeper	0.642s
255:  ok  	github.com/carverauto/serviceradar/pkg/sync	2.242s	coverage: 1.5% of statements in ./...
256:  FAIL
257:  ##[error]Process completed with exit code 1.
258:  Post job cleanup.
259:  [command]/usr/bin/git version
260:  git version 2.51.2
261:  Temporarily overriding HOME='/home/runner/work/_temp/629a0b1f-5e0b-4013-a597-4ab04034949b' before making global git config changes
262:  Adding repository directory to the temporary git global config as a safe directory
263:  [command]/usr/bin/git config --global --add safe.directory /home/runner/work/serviceradar/serviceradar
264:  [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
265:  [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
266:  fatal: No url found for submodule path 'timescaledb' in .gitmodules
267:  ##[warning]The process '/usr/bin/git' failed with exit code 128
268:  Cleaning up orphan processes
