[
  {
    "path": ".dockerignore",
    "content": "bin/\n"
  },
  {
    "path": ".gitattributes",
    "content": "# note: core.autocrlf is a git config option, not a .gitattributes attribute\n*.golden text eol=lf\n"
  },
  {
    "path": ".github/CODEOWNERS",
    "content": "# global rules\n* @docker/compose-maintainers"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.yml",
    "content": "name: 🐞 Bug\ndescription: File a bug/issue\ntitle: \"[BUG] <title>\"\nlabels: ['status/0-triage', 'kind/bug']\nbody:\n  - type: textarea\n    attributes:\n      label: Description\n      description: |\n        Briefly describe the problem you are having.\n\n        Include both the current behavior (what you are seeing) as well as what you expected to happen.\n    validations:\n      required: true\n  - type: markdown\n    attributes:\n      value: |\n        [Docker Swarm](https://www.mirantis.com/software/swarm/) uses a distinct compose file parser and \n        as such doesn't support some of the recent features of Docker Compose. Please contact Mirantis\n        if you need assistance with compose file support in Docker Swarm.\n  - type: textarea\n    attributes:\n      label: Steps To Reproduce\n      description: Steps to reproduce the behavior.\n      placeholder: |\n        1. In this environment...\n        2. With this config...\n        3. Run '...'\n        4. See error...\n    validations:\n      required: false\n  - type: textarea\n    attributes:\n      label: Compose Version\n      description: |\n        Paste output of `docker compose version` and `docker-compose version`.\n      render: Text\n    validations:\n      required: false\n  - type: textarea\n    attributes:\n      label: Docker Environment\n      description: Paste output of `docker info`.\n      render: Text\n    validations:\n      required: false\n  - type: textarea\n    attributes:\n      label: Anything else?\n      description: |\n        Links? References? Anything that will give us more context about the issue you are encountering!\n\n        Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.\n    validations:\n      required: false\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: true\ncontact_links:\n  - name: Docker Community Slack\n    url: https://dockr.ly/slack\n    about: 'Use the #docker-compose channel'\n  - name: Docker Support Forums\n    url: https://forums.docker.com/c/open-source-projects/compose/15\n    about: 'Use the \"Open Source Projects > Compose\" category'\n  - name: 'Ask on Stack Overflow'\n    url: https://stackoverflow.com/questions/tagged/docker-compose\n    about: 'Use the [docker-compose] tag when creating new questions'\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.yaml",
    "content": "name: Feature request\ndescription: Missing functionality? Come tell us about it!\nlabels:\n  - kind/feature\n  - status/0-triage\nbody:\n  - type: textarea\n    id: description\n    attributes:\n      label: Description\n      description: What is the feature you want to see?\n    validations:\n      required: true\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "**What I did**\n\n**Related issue**\n<!-- If this is a bug fix, make sure your description includes \"fixes #xxxx\", or \"closes #xxxx\" -->\n\n**(not mandatory) A picture of a cute animal, if possible in relation to what you did**\n"
  },
  {
    "path": ".github/SECURITY.md",
    "content": "# Security Policy\n\nThe maintainers of Docker Compose take security seriously. If you discover\na security issue, please bring it to their attention right away!\n\n## Reporting a Vulnerability\n\nPlease **DO NOT** file a public issue; instead, send your report privately\nto [security@docker.com](mailto:security@docker.com).\n\nReporter(s) can expect a response within 72 hours, acknowledging the issue was\nreceived.\n\n## Review Process\n\nAfter receiving the report, an initial triage and technical analysis is\nperformed to confirm the report and determine its scope. We may request\nadditional information in this stage of the process.\n\nOnce a reviewer has confirmed the relevance of the report, a draft security\nadvisory will be created on GitHub. The draft advisory will be used to discuss\nthe issue with maintainers, the reporter(s), and where applicable, other\naffected parties under embargo.\n\nIf the vulnerability is accepted, a timeline for developing a patch, public\ndisclosure, and patch release will be determined. If there is an embargo period\non public disclosure before the patch release, the reporter(s) are expected to\nparticipate in the discussion of the timeline and abide by agreed upon dates\nfor public disclosure.\n\n## Accreditation\n\nSecurity reports are greatly appreciated and we will publicly thank you,\nalthough we will keep your name confidential if you request it. We also like to\nsend gifts - if you're into swag, make sure to let us know. We do not currently\noffer a paid security bounty program.\n\n## Supported Versions\n\nThis project does not provide long-term supported versions; only the current\nrelease and `main` branch are actively maintained. Docker Compose v1 and the\ncorresponding [v1 branch](https://github.com/docker/compose/tree/v1) reached\nEOL and are no longer supported.\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\nupdates:\n  - package-ecosystem: gomod\n    directory: /\n    schedule:\n      interval: daily\n    ignore:\n      # docker + moby deps require coordination\n      - dependency-name: \"github.com/docker/buildx\"\n        # buildx is still 0.x\n        update-types: [\"version-update:semver-minor\"]\n      - dependency-name: \"github.com/moby/buildkit\"\n        # buildkit is still 0.x\n        update-types: [ \"version-update:semver-minor\" ]\n      - dependency-name: \"github.com/docker/cli\"\n        update-types: [\"version-update:semver-major\"]\n      - dependency-name: \"github.com/docker/docker\"\n        update-types: [\"version-update:semver-major\"]\n      - dependency-name: \"github.com/containerd/containerd\"\n        # containerd major/minor must be kept in sync with moby\n        update-types: [ \"version-update:semver-major\", \"version-update:semver-minor\" ]\n        # OTEL dependencies should be upgraded in sync with engine, cli, buildkit and buildx projects\n      - dependency-name: \"go.opentelemetry.io/*\"\n"
  },
  {
    "path": ".github/stale.yml",
    "content": "# Configuration for probot-stale - https://github.com/probot/stale\n\n# Number of days of inactivity before an Issue or Pull Request becomes stale\ndaysUntilStale: 90\n\n# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.\n# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.\ndaysUntilClose: 7\n\n# Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled)\nonlyLabels: []\n\n# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable\nexemptLabels:\n  - \"kind/feature\"\n\n# Set to true to ignore issues in a project (defaults to false)\nexemptProjects: false\n\n# Set to true to ignore issues in a milestone (defaults to false)\nexemptMilestones: false\n\n# Set to true to ignore issues with an assignee (defaults to false)\nexemptAssignees: true\n\n# Label to use when marking as stale\nstaleLabel: stale\n\n# Comment to post when marking as stale. Set to `false` to disable\nmarkComment: >\n  This issue has been automatically marked as stale because it has not had\n  recent activity. It will be closed if no further activity occurs. Thank you\n  for your contributions.\n\n# Comment to post when removing the stale label.\nunmarkComment: >\n  This issue is no longer marked as stale due to recent activity.\n\n# Comment to post when closing a stale Issue or Pull Request.\ncloseComment: >\n  This issue has been automatically closed because it has had no recent activity during the stale period.\n\n# Limit the number of actions per hour, from 1-30. Default is 30\nlimitPerRun: 30\n\n# Limit to only `issues` or `pulls`\nonly: issues\n\n# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':\n# pulls:\n#   daysUntilStale: 30\n#   markComment: >\n#     This pull request has been automatically marked as stale because it has not had\n#     recent activity. It will be closed if no further activity occurs. Thank you\n#     for your contributions.\n\n# issues:\n#   exemptLabels:\n#     - confirmed"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: ci\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.ref }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - 'main'\n    tags:\n      - 'v*'\n  pull_request:\n  workflow_dispatch:\n    inputs:\n      debug_enabled:\n        description: 'To run with tmate enter \"debug_enabled\"'\n        required: false\n        default: \"false\"\n\npermissions:\n  contents: read # to fetch code (actions/checkout)\n\njobs:\n  validate:\n    runs-on: ubuntu-latest\n    strategy:\n      fail-fast: false\n      matrix:\n        target:\n          - lint\n          - validate-go-mod\n          - validate-headers\n          - validate-docs\n    steps:\n      -\n        name: Checkout\n        uses: actions/checkout@v4\n      -\n        name: Set up Docker Buildx\n        uses: docker/setup-buildx-action@v3\n      -\n        name: Run\n        run: |\n          make ${{ matrix.target }}\n\n  binary:\n    uses: docker/github-builder/.github/workflows/bake.yml@v1.4.0\n    permissions:\n      contents: read # same as global permission\n      id-token: write # for signing attestation(s) with GitHub OIDC Token\n    with:\n      runner: amd64\n      artifact-name: compose\n      artifact-upload: true\n      cache: true\n      cache-scope: binary\n      target: release\n      output: local\n      sbom: true\n      sign: ${{ github.event_name != 'pull_request' }}\n\n  binary-finalize:\n    runs-on: ubuntu-latest\n    needs:\n      - binary\n    steps:\n      -\n        name: Download artifacts\n        uses: actions/download-artifact@v7\n        with:\n          path: /tmp/compose-output\n          name: ${{ needs.binary.outputs.artifact-name }}\n      -\n        name: Rename provenance and sbom\n        run: |\n          for pdir in /tmp/compose-output/*/; do\n            (\n              cd \"$pdir\"\n              binname=$(find . -name 'docker-compose-*')\n              filename=$(basename \"${binname%.exe}\")\n              mv \"provenance.json\" \"${filename}.provenance.json\"\n              mv \"sbom-binary.spdx.json\" \"${filename}.sbom.json\"\n              find . -name 'sbom*.json' -exec rm {} \\;\n              if [ -f \"provenance.sigstore.json\" ]; then\n                mv \"provenance.sigstore.json\" \"${filename}.sigstore.json\"\n              fi\n            )\n          done\n          mkdir -p \"./bin/release\"\n          mv /tmp/compose-output/**/* \"./bin/release/\"\n      -\n        name: Create checksum file\n        working-directory: ./bin/release\n        run: |\n          find . -type f -print0 | sort -z | xargs -r0 shasum -a 256 -b | sed 's# \\*\\./# *#' > $RUNNER_TEMP/checksums.txt\n          shasum -a 256 -U -c $RUNNER_TEMP/checksums.txt\n          mv $RUNNER_TEMP/checksums.txt .\n          cat checksums.txt | while read sum file; do\n            if [[ \"${file#\\*}\" == docker-compose-* && \"${file#\\*}\" != *.provenance.json && \"${file#\\*}\" != *.sbom.json && \"${file#\\*}\" != *.sigstore.json ]]; then\n              echo \"$sum $file\" > ${file#\\*}.sha256\n            fi\n          done\n      -\n        name: Upload artifacts\n        uses: actions/upload-artifact@v6\n        with:\n          name: release\n          path: ./bin/release/*\n          if-no-files-found: error\n\n  bin-image-test:\n    if: github.event_name == 'pull_request'\n    uses: docker/github-builder/.github/workflows/bake.yml@v1.4.0\n    with:\n      runner: amd64\n      target: image-cross\n      cache: true\n      cache-scope: bin-image-test\n      output: image\n      push: false\n      sbom: true\n      set-meta-labels: true\n      meta-images: |\n        compose-bin\n      meta-tags: |\n        type=ref,event=pr\n      meta-bake-target: meta-helper\n\n  test:\n    runs-on: ubuntu-latest\n    steps:\n      -\n        name: Set up Docker Buildx\n        uses: docker/setup-buildx-action@v3\n      -\n        name: Test\n        uses: docker/bake-action@v6\n        with:\n          targets: test\n          set: |\n            *.cache-from=type=gha,scope=test\n            *.cache-to=type=gha,scope=test\n      -\n        name: Gather coverage data\n        uses: actions/upload-artifact@v4\n        with:\n          name: coverage-data-unit\n          path: bin/coverage/unit/\n          if-no-files-found: error\n      -\n        name: Unit Test Summary\n        uses: test-summary/action@v2\n        with:\n          paths: bin/coverage/unit/report.xml\n        if: always()\n\n  e2e:\n    runs-on: ubuntu-latest\n    name: e2e (${{ matrix.mode }}, ${{ matrix.channel }})\n    strategy:\n      fail-fast: false\n      matrix:\n        include:\n          # current stable\n          - mode: plugin\n            engine: 29\n            channel: stable\n          - mode: standalone\n            engine: 29\n            channel: stable\n\n          # old stable (latest major - 1)\n          - mode: plugin\n            engine: 28\n            channel: oldstable\n          - mode: standalone\n            engine: 28\n            channel: oldstable\n    steps:\n      - name: Prepare\n        run: |\n          mode=${{ matrix.mode }}\n          engine=${{ matrix.engine }}\n          echo \"MODE_ENGINE_PAIR=${mode}-${engine}\" >> $GITHUB_ENV\n\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Install Docker ${{ matrix.engine }}\n        run: |\n          sudo systemctl stop docker.service\n          sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin\n          sudo apt-get install curl\n          curl -fsSL https://test.docker.com -o get-docker.sh\n          sudo sh ./get-docker.sh --version ${{ matrix.engine }}\n\n      - name: Check Docker Version\n        run: docker --version\n\n      - name: Set up Docker Buildx\n        uses: docker/setup-buildx-action@v3\n\n      - name: Set up Docker Model\n        run: |\n          sudo apt-get install docker-model-plugin\n          docker model version\n\n      - name: Set up Go\n        uses: actions/setup-go@v6\n        with:\n          go-version-file: '.go-version'\n          check-latest: true\n          cache: true\n\n      - name: Build example provider\n        run: make example-provider\n\n      - name: Build\n        uses: docker/bake-action@v6\n        with:\n          source: .\n          targets: binary-with-coverage\n          set: |\n            *.cache-from=type=gha,scope=binary-linux-amd64\n            *.cache-from=type=gha,scope=binary-e2e-${{ matrix.mode }}\n            *.cache-to=type=gha,scope=binary-e2e-${{ matrix.mode }},mode=max\n        env:\n          BUILD_TAGS: e2e\n\n      - name: Setup tmate session\n        if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled }}\n        uses: mxschmitt/action-tmate@8b4e4ac71822ed7e0ad5fb3d1c33483e9e8fb270 # v3.11\n        with:\n          limit-access-to-actor: true\n          github-token: ${{ secrets.GITHUB_TOKEN }}\n\n      - name: Test plugin mode\n        if: ${{ matrix.mode == 'plugin' }}\n        run: |\n          rm -rf ./bin/coverage/e2e\n          mkdir -p ./bin/coverage/e2e\n          make e2e-compose GOCOVERDIR=bin/coverage/e2e TEST_FLAGS=\"-v\"\n\n      - name: Gather coverage data\n        if: ${{ matrix.mode == 'plugin' }}\n        uses: actions/upload-artifact@v4\n        with:\n          name: coverage-data-e2e-${{ env.MODE_ENGINE_PAIR }}\n          path: bin/coverage/e2e/\n          if-no-files-found: error\n\n      - name: Test standalone mode\n        if: ${{ matrix.mode == 'standalone' }}\n        run: |\n          rm -f /usr/local/bin/docker-compose\n          cp bin/build/docker-compose /usr/local/bin\n          make e2e-compose-standalone\n\n      - name: e2e Test Summary\n        uses: test-summary/action@v2\n        with:\n          paths: /tmp/report/report.xml\n        if: always()\n\n  coverage:\n    runs-on: ubuntu-latest\n    needs:\n      - test\n      - e2e\n    steps:\n      # codecov won't process the report without the source code available\n      - name: Checkout\n        uses: actions/checkout@v4\n      - name: Set up Go\n        uses: actions/setup-go@v6\n        with:\n          go-version-file: '.go-version'\n          check-latest: true\n      - name: Download unit test coverage\n        uses: actions/download-artifact@v4\n        with:\n          name: coverage-data-unit\n          path: coverage/unit\n          merge-multiple: true\n      - name: Download E2E test coverage\n        uses: actions/download-artifact@v4\n        with:\n          pattern: coverage-data-e2e-*\n          path: coverage/e2e\n          merge-multiple: true\n      - name: Merge coverage reports\n        run: |\n          go tool covdata textfmt -i=./coverage/unit,./coverage/e2e -o ./coverage.txt\n      - name: Store coverage report in GitHub Actions\n        uses: actions/upload-artifact@v4\n        with:\n          name: go-covdata-txt\n          path: ./coverage.txt\n          if-no-files-found: error\n      - name: Upload coverage to Codecov\n        uses: codecov/codecov-action@v5\n        with:\n          files: ./coverage.txt\n\n  release:\n    permissions:\n      contents: write # to create a release (ncipollo/release-action)\n    runs-on: ubuntu-latest\n    needs:\n      - binary-finalize\n    steps:\n      -\n        name: Checkout\n        uses: actions/checkout@v4\n      -\n        name: Download artifacts\n        uses: actions/download-artifact@v7\n        with:\n          path: ./bin/release\n          name: release\n      -\n        name: List artifacts\n        run: |\n          tree -nh ./bin/release\n      -\n        name: Check artifacts\n        run: |\n          find bin/release -type f -exec file -e ascii -- {} +\n      -\n        name: GitHub Release\n        if: startsWith(github.ref, 'refs/tags/v')\n        uses: ncipollo/release-action@58ae73b360456532aafd58ee170c045abbeaee37 # v1.10.0\n        with:\n          artifacts: ./bin/release/*\n          generateReleaseNotes: true\n          draft: true\n          token: ${{ secrets.GITHUB_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/docs-upstream.yml",
    "content": "# this workflow runs the remote validate bake target from docker/docs\n# to check if yaml reference docs used in this repo are valid\nname: docs-upstream\n\n# Default to 'contents: read', which grants actions to read commits.\n#\n# If any permission is set, any permission not included in the list is\n# implicitly set to \"none\".\n#\n# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions\npermissions:\n  contents: read\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.ref }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - 'main'\n      - 'v[0-9]*'\n    paths:\n      - '.github/workflows/docs-upstream.yml'\n      - 'docs/**'\n  pull_request:\n    paths:\n      - '.github/workflows/docs-upstream.yml'\n      - 'docs/**'\n\njobs:\n  docs-yaml:\n    runs-on: ubuntu-latest\n    steps:\n      -\n        name: Checkout\n        uses: actions/checkout@v4\n      -\n        name: Upload reference YAML docs\n        uses: actions/upload-artifact@v4\n        with:\n          name: docs-yaml\n          path: docs/reference\n          retention-days: 1\n\n  validate:\n    uses: docker/docs/.github/workflows/validate-upstream.yml@main\n    needs:\n      - docs-yaml\n    with:\n      module-name: docker/compose\n"
  },
  {
    "path": ".github/workflows/merge.yml",
    "content": "name: merge\n\nconcurrency:\n  group: ${{ github.workflow }}-${{ github.ref }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - 'main'\n    tags:\n      - 'v*'\n\npermissions:\n  contents: read # to fetch code (actions/checkout)\n\nenv:\n  REPO_SLUG: \"docker/compose-bin\"\n\njobs:\n  e2e:\n    name: Build and test\n    runs-on: ${{ matrix.os }}\n    timeout-minutes: 15\n    strategy:\n      fail-fast: false\n      matrix:\n        os: [desktop-windows, desktop-macos, desktop-m1]\n        # mode: [plugin, standalone]\n        mode: [plugin]\n    env:\n      GO111MODULE: \"on\"\n    steps:\n      - uses: actions/checkout@v4\n\n      - uses: actions/setup-go@v6\n        with:\n          go-version-file: '.go-version'\n          cache: true\n          check-latest: true\n\n      - name: List Docker resources on machine\n        run: |\n          docker ps --all\n          docker volume ls\n          docker network ls\n          docker image ls\n      - name: Remove Docker resources on machine\n        continue-on-error: true\n        run: |\n          docker kill $(docker ps -q)\n          docker rm -f $(docker ps -aq)\n          docker volume rm -f $(docker volume ls -q)\n          docker ps --all\n\n      - name: Unit tests\n        run: make test\n\n      - name: Build binaries\n        run: |\n          make\n      - name: Check arch of go compose binary\n        run: |\n          file ./bin/build/docker-compose\n        if: ${{ !contains(matrix.os, 'desktop-windows') }}\n      -\n        name: Test plugin mode\n        if: ${{ matrix.mode == 'plugin' }}\n        run: |\n          make e2e-compose\n      -\n        name: Test standalone mode\n        if: ${{ matrix.mode == 'standalone' }}\n        run: |\n          make e2e-compose-standalone\n\n  bin-image-prepare:\n    runs-on: ubuntu-24.04\n    outputs:\n      repo-slug: ${{ env.REPO_SLUG }}\n    steps:\n      # FIXME: can't use env object in reusable workflow inputs: https://github.com/orgs/community/discussions/26671\n      - run: echo \"Exposing env vars for reusable workflow\"\n\n  bin-image:\n    uses: docker/github-builder/.github/workflows/bake.yml@v1.4.0\n    needs:\n      - bin-image-prepare\n    permissions:\n      contents: read # same as global permission\n      id-token: write # for signing attestation(s) with GitHub OIDC Token\n    with:\n      runner: amd64\n      target: image-cross\n      cache: true\n      cache-scope: bin-image\n      output: image\n      push: ${{ github.event_name != 'pull_request' }}\n      sbom: true\n      set-meta-labels: true\n      meta-images: |\n        ${{ needs.bin-image-prepare.outputs.repo-slug }}\n      meta-tags: |\n        type=ref,event=tag\n        type=edge\n      meta-bake-target: meta-helper\n    secrets:\n      registry-auths: |\n        - registry: docker.io\n          username: ${{ secrets.DOCKERPUBLICBOT_USERNAME }}\n          password: ${{ secrets.DOCKERPUBLICBOT_WRITE_PAT }}\n\n  desktop-edge-test:\n    runs-on: ubuntu-latest\n    needs: bin-image\n    steps:\n      -\n        name: Generate Token\n        id: generate_token\n        uses: actions/create-github-app-token@v1\n        with:\n          app-id: ${{ vars.DOCKERDESKTOP_APP_ID }}\n          private-key: ${{ secrets.DOCKERDESKTOP_APP_PRIVATEKEY }}\n          owner: docker\n          repositories: |\n            ${{ secrets.DOCKERDESKTOP_REPO }}\n      -\n        name: Trigger Docker Desktop e2e with edge version\n        uses: actions/github-script@v7\n        with:\n          github-token: ${{ steps.generate_token.outputs.token }}\n          script: |\n            await github.rest.actions.createWorkflowDispatch({\n              owner: 'docker',\n              repo: '${{ secrets.DOCKERDESKTOP_REPO }}',\n              workflow_id: 'compose-edge-integration.yml',\n              ref: 'main',\n              inputs: {\n                \"image-tag\": \"${{ env.REPO_SLUG }}:edge\"\n              }\n            })\n"
  },
  {
    "path": ".github/workflows/scorecards.yml",
    "content": "name: Scorecards supply-chain security\non:\n  # Only the default branch is supported.\n  branch_protection_rule:\n  schedule:\n    - cron: '44 9 * * 4'\n  push:\n    branches: [ \"main\" ]\n\njobs:\n  analysis:\n    name: Scorecards analysis\n    runs-on: ubuntu-latest\n    permissions:\n      # Needed to upload the results to code-scanning dashboard.\n      security-events: write\n      # Used to receive a badge.\n      id-token: write\n      # read permissions to all the other objects\n      actions: read\n      attestations: read\n      checks: read\n      contents: read\n      deployments: read\n      issues: read\n      discussions: read\n      packages: read\n      pages: read\n      pull-requests: read\n      statuses: read\n\n    steps:\n      - name: \"Checkout code\"\n        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # tag=v4.2.2\n        with:\n          persist-credentials: false\n\n      - name: \"Run analysis\"\n        uses: ossf/scorecard-action@62b2cac7ed8198b15735ed49ab1e5cf35480ba46 # tag=v2.4.0\n        with:\n          results_file: results.sarif\n          results_format: sarif\n\n          # Publish the results for public repositories to enable scorecard badges. For more details, see\n          # https://github.com/ossf/scorecard-action#publishing-results.\n          # For private repositories, `publish_results` will automatically be set to `false`, regardless\n          # of the value entered here.\n          publish_results: true\n\n      # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF\n      # format to the repository Actions tab.\n      - name: \"Upload artifact\"\n        uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b # tag=v4.5.0\n        with:\n          name: SARIF file\n          path: results.sarif\n          retention-days: 5\n\n      # Upload the results to GitHub's code scanning dashboard.\n      - name: \"Upload to code-scanning\"\n        uses: github/codeql-action/upload-sarif@3096afedf9873361b2b2f65e1445b13272c83eb8 # tag=v2.20.0\n        with:\n          sarif_file: results.sarif\n"
  },
  {
    "path": ".github/workflows/stale.yml",
    "content": "name: 'Close stale issues'\n\n# Default to 'contents: read', which grants actions to read commits.\n#\n# If any permission is set, any permission not included in the list is\n# implicitly set to \"none\".\n#\n# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions\npermissions:\n  contents: read\n\non:\n  schedule:\n    - cron: '0 0 * * 0,3' # at midnight UTC every Sunday and Wednesday\njobs:\n  stale:\n    runs-on: ubuntu-latest\n    permissions:\n      issues: write\n      pull-requests: write\n    steps:\n      - uses: actions/stale@v9\n        with:\n          repo-token: ${{ secrets.GITHUB_TOKEN }}\n          stale-issue-message: >\n            This issue has been automatically marked as stale because it has not had\n            recent activity. It will be closed if no further activity occurs. Thank you\n            for your contributions.\n          days-before-issue-stale: 150 # marks stale after 5 months\n          days-before-issue-close: 30 # closes 1 month after being marked with no action\n          stale-issue-label: \"stale\"\n          exempt-issue-labels: \"kind/feature,kind/enhancement\"\n          "
  },
  {
    "path": ".gitignore",
    "content": "bin/\n/.vscode/\ncoverage.out\ncovdatafiles/\n.DS_Store\npkg/e2e/*.tar\n"
  },
  {
    "path": ".go-version",
    "content": "1.25.8"
  },
  {
    "path": ".golangci.yml",
    "content": "version: \"2\"\nrun:\n  concurrency: 2\nlinters:\n  default: none\n  enable:\n    - copyloopvar\n    - depguard\n    - errcheck\n    - errorlint\n    - forbidigo\n    - gocritic\n    - gocyclo\n    - gomodguard\n    - govet\n    - ineffassign\n    - lll\n    - misspell\n    - nakedret\n    - nolintlint\n    - revive\n    - staticcheck\n    - testifylint\n    - unconvert\n    - unparam\n    - unused\n  settings:\n    depguard:\n      rules:\n        all:\n          deny:\n            - pkg: io/ioutil\n              desc: io/ioutil package has been deprecated\n            - pkg: github.com/docker/docker/errdefs\n              desc: use github.com/containerd/errdefs instead.\n            - pkg: golang.org/x/exp/maps\n              desc: use stdlib maps package\n            - pkg: golang.org/x/exp/slices\n              desc: use stdlib slices package\n            - pkg: gopkg.in/yaml.v2\n              desc: compose-go uses yaml.v3\n    forbidigo:\n      analyze-types: true\n      forbid:\n        - pattern: 'context\\.Background'\n          pkg: '^context$'\n          msg: \"in tests, use t.Context() instead of context.Background()\"\n        - pattern: 'context\\.TODO'\n          pkg: '^context$'\n          msg: \"in tests, use t.Context() instead of context.TODO()\"\n    gocritic:\n      disabled-checks:\n        - paramTypeCombine\n        - unnamedResult\n        - whyNoLint\n      enabled-tags:\n        - diagnostic\n        - opinionated\n        - style\n    gocyclo:\n      min-complexity: 16\n    gomodguard:\n      blocked:\n        modules:\n          - github.com/pkg/errors:\n              recommendations:\n                - errors\n                - fmt\n        versions:\n          - github.com/distribution/distribution:\n              reason: use distribution/reference\n          - gotest.tools:\n              version: < 3.0.0\n              reason: deprecated, pre-modules version\n    lll:\n      line-length: 200\n    revive:\n      rules:\n        - name: package-comments\n          disabled: true\n  exclusions:\n    generated: lax\n    paths:\n      - third_party$\n      - builtin$\n      - examples$\n    rules:\n      - path-except: '_test\\.go'\n        linters:\n          - forbidigo\nissues:\n  max-issues-per-linter: 0\n  max-same-issues: 0\nformatters:\n  enable:\n    - gci\n    - gofumpt\n  exclusions:\n    generated: lax\n    paths:\n      - third_party$\n      - builtin$\n      - examples$\n  settings:\n    gci:\n      sections:\n        - standard\n        - default\n        - localmodule\n      custom-order: true # make the section order the same as the order of \"sections\".\n"
  },
  {
    "path": "BUILDING.md",
    "content": "\n### Prerequisites\n\n* Windows:\n  * [Docker Desktop](https://docs.docker.com/desktop/setup/install/windows-install/)\n  * make\n  * go (see [go.mod](go.mod) for minimum version)\n* macOS:\n  * [Docker Desktop](https://docs.docker.com/desktop/setup/install/mac-install/)\n  * make\n  * go (see [go.mod](go.mod) for minimum version)\n* Linux:\n  * [Docker 20.10 or later](https://docs.docker.com/engine/install/)\n  * make\n  * go (see [go.mod](go.mod) for minimum version)\n\n### Building the CLI\n\nOnce you have the prerequisites installed, you can build the CLI using:\n\n```console\nmake\n```\n\nThis will output a `docker-compose` CLI plugin for your host machine in\n`./bin/build`.\n\nYou can statically cross-compile the CLI for Windows, macOS, and Linux using the\n`cross` target.\n\n### Unit tests\n\nTo run all of the unit tests, run:\n\n```console\nmake test\n```\n\nIf you need to update a golden file, run `go test ./... -test.update-golden`.\n\n### End-to-end tests\n\nTo run e2e tests, the Compose CLI binary needs to be built. Each e2e test command has a\nvariant prefixed with `build-and-e2e` that builds the CLI before executing the tests.\n\nNote that this requires a local Docker Engine to be running.\n\n#### Whole end-to-end tests suite\n\nTo execute both CLI and standalone e2e tests, run:\n\n```console\nmake e2e\n```\n\nOr if you need to build the CLI first, run:\n\n```console\nmake build-and-e2e\n```\n\n#### Plugin end-to-end tests suite\n\nTo execute CLI plugin e2e tests, run:\n\n```console\nmake e2e-compose\n```\n\nOr if you need to build the CLI first, run:\n\n```console\nmake build-and-e2e-compose\n```\n\n#### Standalone end-to-end tests suite\n\nTo execute the standalone CLI e2e tests, run:\n\n```console\nmake e2e-compose-standalone\n```\n\nOr if you need to build the CLI first, run:\n\n```console\nmake build-and-e2e-compose-standalone\n```\n\n## Releases\n\nTo create a new release:\n* Check that CI is green on the main branch for the commit you want to release\n* Run the release GitHub Actions workflow with a tag of the form `vx.y.z`, following the existing tags\n\nThis will automatically create a new tag and release, and make binaries for\nWindows, macOS, and Linux available for download on the\n[releases page](https://github.com/docker/compose/releases).\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to Docker\n\nWant to hack on Docker? Awesome!  We have a contributor's guide that explains\n[setting up a Docker development environment and the contribution\nprocess](https://docs.docker.com/contribute/).\n\nThis page contains information about reporting issues as well as some tips and\nguidelines useful to experienced open source contributors. Finally, make sure\nyou read our [community guidelines](#docker-community-guidelines) before you\nstart participating.\n\n## Topics\n\n- [Contributing to Docker](#contributing-to-docker)\n  - [Topics](#topics)\n  - [Reporting security issues](#reporting-security-issues)\n  - [Reporting other issues](#reporting-other-issues)\n  - [Quick contribution tips and guidelines](#quick-contribution-tips-and-guidelines)\n    - [Pull requests are always welcome](#pull-requests-are-always-welcome)\n    - [Talking to other Docker users and contributors](#talking-to-other-docker-users-and-contributors)\n    - [Conventions](#conventions)\n    - [Merge approval](#merge-approval)\n    - [Sign your work](#sign-your-work)\n    - [How can I become a maintainer?](#how-can-i-become-a-maintainer)\n  - [Docker community guidelines](#docker-community-guidelines)\n  - [Coding Style](#coding-style)\n\n## Reporting security issues\n\nThe Docker maintainers take security seriously. If you discover a security\nissue, please bring it to their attention right away!\n\nPlease **DO NOT** file a public issue, instead, send your report privately to\n[security@docker.com](mailto:security@docker.com).\n\nSecurity reports are greatly appreciated and we will publicly thank you for them.\nWe also like to send gifts&mdash;if you're into Docker swag, make sure to let\nus know. We currently do not offer a paid security bounty program but are not\nruling it out in the future.\n\n\n## Reporting other issues\n\nA great way to contribute to the project is to send a detailed report when you\nencounter an issue. 
We always appreciate a well-written, thorough bug report,\nand will thank you for it!\n\nCheck that [our issue database](https://github.com/docker/compose/labels/Docker%20Compose%20V2)\ndoesn't already include that problem or suggestion before submitting an issue.\nIf you find a match, you can use the \"subscribe\" button to get notified of\nupdates. Do *not* leave random \"+1\" or \"I have this too\" comments, as they\nonly clutter the discussion, and don't help to resolve it. However, if you\nhave ways to reproduce the issue or have additional information that may help\nresolve the issue, please leave a comment.\n\nWhen reporting issues, always include:\n\n* The output of `docker version`.\n* The output of `docker context show`.\n* The output of `docker info`.\n\nAlso, include the steps required to reproduce the problem if possible and\napplicable. This information will help us review and fix your issue faster.\nWhen sending lengthy log files, consider posting them as a gist\n(https://gist.github.com).\nDon't forget to remove sensitive data from your log files before posting (you\ncan replace those parts with \"REDACTED\").\n\n_Note:_\nMaintainers might request additional information to diagnose an issue.\nIf the initial reporter doesn't respond within a reasonable time (a few weeks),\nthe issue will be closed.\n\n## Quick contribution tips and guidelines\n\nThis section gives the experienced contributor some tips and guidelines.\n\n### Pull requests are always welcome\n\nNot sure if that typo is worth a pull request? Found a bug and know how to fix\nit? Do it! We will appreciate it. Any significant change, like adding a backend,\nshould be documented as\n[a GitHub issue](https://github.com/docker/compose/issues)\nbefore anybody starts working on it.\n\nWe are always thrilled to receive pull requests. We do our best to process them\nquickly. 
If your pull request is not accepted on the first try,\ndon't get discouraged!\n\n### Talking to other Docker users and contributors\n\n<table class=\"tg\">\n  <col width=\"45%\">\n  <col width=\"65%\">\n  <tr>\n    <td>Community Slack</td>\n    <td>\n      The Docker Community has a dedicated Slack chat to discuss features and issues. You can sign up <a href=\"https://www.docker.com/community/\" target=\"_blank\">with this link</a>.\n    </td>\n  </tr>\n  <tr>\n    <td>Forums</td>\n    <td>\n      A public forum for users to discuss questions and explore current design patterns and\n      best practices about Docker and related projects in the Docker Ecosystem. To participate,\n      just log in with your Docker Hub account on <a href=\"https://forums.docker.com\" target=\"_blank\">https://forums.docker.com</a>.\n    </td>\n  </tr>\n  <tr>\n    <td>Twitter</td>\n    <td>\n      You can follow <a href=\"https://twitter.com/docker/\" target=\"_blank\">Docker's Twitter feed</a>\n      to get updates on our products. You can also tweet us questions or just\n      share blogs or stories.\n    </td>\n  </tr>\n  <tr>\n    <td>Stack Overflow</td>\n    <td>\n      Stack Overflow has over 17000 Docker questions listed. We regularly\n      monitor <a href=\"https://stackoverflow.com/questions/tagged/docker\" target=\"_blank\">Docker questions</a>\n      and so do many other knowledgeable Docker users.\n    </td>\n  </tr>\n</table>\n\n\n### Conventions\n\nFork the repository and make changes on your fork in a feature branch:\n\n- If it's a bug fix branch, name it XXXX-something where XXXX is the number of\n    the issue.\n- If it's a feature branch, create an enhancement issue to announce\n    your intentions, and name it XXXX-something where XXXX is the number of the\n    issue.\n\nSubmit unit tests for your changes. Go has a great test framework built in; use\nit! Take a look at existing tests for inspiration. Also, end-to-end tests are\navailable. 
Run the full test suite, both unit tests and e2e tests, on your\nbranch before submitting a pull request. See [BUILDING.md](BUILDING.md) for\ninstructions to build and run tests.\n\nWrite clean code. Universally formatted code promotes ease of writing, reading,\nand maintenance. Always run `gofmt -s -w file.go` on each changed file before\ncommitting your changes. Most editors have plug-ins that do this automatically.\n\nPull request descriptions should be as clear as possible and include a reference\nto all the issues that they address.\n\nCommit messages must start with a capitalized and short summary (max. 50 chars)\nwritten in the imperative, followed by an optional, more detailed explanatory\ntext which is separated from the summary by an empty line.\n\nCode review comments may be added to your pull request. Discuss, then make the\nsuggested modifications and push additional commits to your feature branch. Post\na comment after pushing. New commits show up in the pull request automatically,\nbut the reviewers are notified only when you comment.\n\nPull requests must be cleanly rebased on top of the base branch without multiple branches\nmixed into the PR.\n\n**Git tip**: If your PR no longer merges cleanly, use `git rebase master` in your\nfeature branch to update your pull request rather than `git merge master`.\n\nBefore you make a pull request, squash your commits into logical units of work\nusing `git rebase -i` and `git push -f`. A logical unit of work is a consistent\nset of patches that should be reviewed together: for example, upgrading the\nversion of a vendored dependency and taking advantage of its newly available\nfeature constitute two separate units of work. Implementing a new function and\ncalling it in another file constitute a single logical unit of work. The vast\nmajority of submissions should have a single commit, so if in doubt: squash\ndown to one.\n\nAfter every commit, make sure the test suite passes. 
Include documentation\nchanges in the same pull request so that a revert would remove all traces of\nthe feature or fix.\n\nInclude an issue reference like `Closes #XXXX` or `Fixes #XXXX` in the pull\nrequest description that closes an issue. Including references automatically\ncloses the issue on a merge.\n\nPlease do not add yourself to the `AUTHORS` file, as it is regenerated regularly\nfrom the Git history.\n\nPlease see the [Coding Style](#coding-style) for further guidelines.\n\n### Merge approval\n\nDocker maintainers use LGTM (Looks Good To Me) in comments on the code review to\nindicate acceptance.\n\nA change requires at least 2 LGTMs from the maintainers of each\ncomponent affected.\n\nFor more details, see the [MAINTAINERS](MAINTAINERS) page.\n\n### Sign your work\n\nThe sign-off is a simple line at the end of the explanation for the patch. Your\nsignature certifies that you wrote the patch or otherwise have the right to pass\nit on as an open-source patch. The rules are pretty simple: if you can certify\nthe below (from [developercertificate.org](https://developercertificate.org/)):\n\n```\nDeveloper Certificate of Origin\nVersion 1.1\n\nCopyright (C) 2004, 2006 The Linux Foundation and its contributors.\n660 York Street, Suite 102,\nSan Francisco, CA 94110 USA\n\nEveryone is permitted to copy and distribute verbatim copies of this\nlicense document, but changing it is not allowed.\n\nDeveloper's Certificate of Origin 1.1\n\nBy making a contribution to this project, I certify that:\n\n(a) The contribution was created in whole or in part by me and I\n    have the right to submit it under the open source license\n    indicated in the file; or\n\n(b) The contribution is based upon previous work that, to the best\n    of my knowledge, is covered under an appropriate open source\n    license and I have the right under that license to submit that\n    work with modifications, whether created in whole or in part\n    by me, under the same open source license 
(unless I am\n    permitted to submit under a different license), as indicated\n    in the file; or\n\n(c) The contribution was provided directly to me by some other\n    person who certified (a), (b) or (c) and I have not modified\n    it.\n\n(d) I understand and agree that this project and the contribution\n    are public and that a record of the contribution (including all\n    personal information I submit with it, including my sign-off) is\n    maintained indefinitely and may be redistributed consistent with\n    this project or the open source license(s) involved.\n```\n\nThen you just add a line to every git commit message:\n\n    Signed-off-by: Joe Smith <joe.smith@email.com>\n\nUse your real name (sorry, no pseudonyms or anonymous contributions.)\n\nIf you set your `user.name` and `user.email` git configs, you can sign your\ncommit automatically with `git commit -s`.\n\n### How can I become a maintainer?\n\nThe procedures for adding new maintainers are explained in the global\n[MAINTAINERS](https://github.com/docker/opensource/blob/main/MAINTAINERS)\nfile in the\n[https://github.com/docker/opensource/](https://github.com/docker/opensource/)\nrepository.\n\nDon't forget: being a maintainer is a time investment. Make sure you\nwill have time to make yourself available. You don't have to be a\nmaintainer to make a difference on the project!\n\n## Docker community guidelines\n\nWe want to keep the Docker community awesome, growing and collaborative. We need\nyour help to keep it that way. To help with this we've come up with some general\nguidelines for the community as a whole:\n\n* Be nice: Be courteous, respectful and polite to fellow community members:\n  no regional, racial, gender or other abuse will be tolerated. 
We like\n  nice people way better than mean ones!\n\n* Encourage diversity and participation: Make everyone in our community feel\n  welcome, regardless of their background and the extent of their\n  contributions, and do everything possible to encourage participation in\n  our community.\n\n* Keep it legal: Basically, don't get us in trouble. Share only content that\n  you own, do not share private or sensitive information, and don't break\n  the law.\n\n* Stay on topic: Make sure that you are posting to the correct channel and\n  avoid off-topic discussions. Remember when you update an issue or respond\n  to an email you are potentially sending it to a large number of people. Please\n  consider this before you update. Also, remember that nobody likes spam.\n\n* Don't send emails to the maintainers: There's no need to send emails to the\n  maintainers to ask them to investigate an issue or to take a look at a\n  pull request. Instead of sending an email, GitHub mentions should be\n  used to ping maintainers to review a pull request, a proposal or an\n  issue.\n\n## Coding Style\n\nUnless explicitly stated, we follow all coding guidelines from the Go\ncommunity. While some of these standards may seem arbitrary, they somehow seem\nto result in a solid, consistent codebase.\n\nIt is possible that the code base does not currently comply with these\nguidelines. We are not looking for a massive PR that fixes this, since that\ngoes against the spirit of the guidelines. All new contributors should make their\nbest effort to clean up and make the code base better than they left it.\nObviously, apply your best judgement. Remember, the goal here is to make the\ncode base easier for humans to navigate and understand. Always keep that in\nmind when nudging others to comply.\n\nThe rules:\n\n1. All code should be formatted with `gofmt -s`.\n2. All code should pass the default levels of\n   [`golint`](https://github.com/golang/lint).\n3. 
All code should follow the guidelines covered in [Effective\n   Go](https://go.dev/doc/effective_go) and [Go Code Review\n   Comments](https://go.dev/wiki/CodeReviewComments).\n4. Include code comments. Tell us the why, the history and the context.\n5. Document _all_ declarations and methods, even private ones. Declare\n   expectations, caveats and anything else that may be important. If a type\n   gets exported, having the comments already there will ensure it's ready.\n6. Variable name length should be proportional to its context and no longer.\n   `noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.\n   In practice, short methods will have short variable names and globals will\n   have longer names.\n7. No underscores in package names. If you need a compound name, step back,\n   and re-examine why you need a compound name. If you still think you need a\n   compound name, lose the underscore.\n8. No utils or helpers packages. If a function is not general enough to\n   warrant its own package, it has not been written generally enough to be a\n   part of a util package. Just leave it unexported and well-documented.\n9. All tests should run with `go test` and outside tooling should not be\n   required. No, we don't need another unit testing framework. Assertion\n   packages are acceptable if they provide _real_ incremental value.\n10. Even though we call these \"rules\" above, they are actually just\n    guidelines. Since you've read all the rules, you now know that.\n\nIf you are having trouble getting into the mood of idiomatic Go, we recommend\nreading through [Effective Go](https://go.dev/doc/effective_go). The\n[Go Blog](https://go.dev/blog/) is also a great resource. Drinking the\nkool-aid is a lot easier than going thirsty.\n"
  },
  {
    "path": "Dockerfile",
    "content": "# syntax=docker/dockerfile:1\n\n\n#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nARG GO_VERSION=1.25.8\nARG XX_VERSION=1.9.0\nARG GOLANGCI_LINT_VERSION=v2.8.0\nARG ADDLICENSE_VERSION=v1.0.0\n\nARG BUILD_TAGS=\"e2e\"\nARG DOCS_FORMATS=\"md,yaml\"\nARG LICENSE_FILES=\".*\\(Dockerfile\\|Makefile\\|\\.go\\|\\.hcl\\|\\.sh\\)\"\n\n# xx is a helper for cross-compilation\nFROM --platform=${BUILDPLATFORM} tonistiigi/xx:${XX_VERSION} AS xx\n\n# osxcross contains the MacOSX cross toolchain for xx\nFROM crazymax/osxcross:15.5-alpine AS osxcross\n\nFROM golangci/golangci-lint:${GOLANGCI_LINT_VERSION}-alpine AS golangci-lint\nFROM ghcr.io/google/addlicense:${ADDLICENSE_VERSION} AS addlicense\n\nFROM --platform=${BUILDPLATFORM} golang:${GO_VERSION}-alpine3.22 AS base\nCOPY --from=xx / /\nRUN apk add --no-cache \\\n      clang \\\n      docker \\\n      file \\\n      findutils \\\n      git \\\n      make \\\n      protoc \\\n      protobuf-dev\nWORKDIR /src\nENV CGO_ENABLED=0\n\nFROM base AS build-base\nCOPY go.* .\nRUN --mount=type=cache,target=/go/pkg/mod \\\n    --mount=type=cache,target=/root/.cache/go-build \\\n    go mod download\n\nFROM build-base AS vendored\nRUN --mount=type=bind,target=.,rw \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    go mod tidy && mkdir /out && cp go.mod go.sum /out\n\nFROM scratch AS vendor-update\nCOPY --from=vendored /out /\n\nFROM vendored AS 
vendor-validate\nRUN --mount=type=bind,target=.,rw <<EOT\n  set -e\n  git add -A\n  cp -rf /out/* .\n  diff=$(git status --porcelain -- go.mod go.sum)\n  if [ -n \"$diff\" ]; then\n    echo >&2 'ERROR: Vendor result differs. Please vendor your package with \"make go-mod-tidy\"'\n    echo \"$diff\"\n    exit 1\n  fi\nEOT\n\nFROM build-base AS build\nARG BUILD_TAGS\nARG BUILD_FLAGS\nARG TARGETPLATFORM\nRUN --mount=type=bind,target=. \\\n    --mount=type=cache,target=/root/.cache \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    --mount=type=bind,from=osxcross,src=/osxsdk,target=/xx-sdk \\\n    xx-go --wrap && \\\n    if [ \"$(xx-info os)\" == \"darwin\" ]; then export CGO_ENABLED=1; export BUILD_TAGS=fsnotify,$BUILD_TAGS; fi && \\\n    make build GO_BUILDTAGS=\"$BUILD_TAGS\" DESTDIR=/out && \\\n    xx-verify --static /out/docker-compose\n\nFROM build-base AS lint\nARG BUILD_TAGS\nENV GOLANGCI_LINT_CACHE=/cache/golangci-lint\nRUN --mount=type=bind,target=. \\\n    --mount=type=cache,target=/root/.cache \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    --mount=type=cache,target=/cache/golangci-lint \\\n    --mount=from=golangci-lint,source=/usr/bin/golangci-lint,target=/usr/bin/golangci-lint \\\n    golangci-lint cache status && \\\n    golangci-lint run --build-tags \"$BUILD_TAGS\" ./...\n\nFROM build-base AS test\nARG CGO_ENABLED=0\nARG BUILD_TAGS\nRUN --mount=type=bind,target=. \\\n    --mount=type=cache,target=/root/.cache \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    rm -rf /tmp/coverage && \\\n    mkdir -p /tmp/coverage && \\\n    rm -rf /tmp/report && \\\n    mkdir -p /tmp/report && \\\n    go run gotest.tools/gotestsum@latest --format testname --junitfile \"/tmp/report/report.xml\" -- -tags \"$BUILD_TAGS\" -v -cover -covermode=atomic $(go list ./... 
| grep -vE 'e2e') -args -test.gocoverdir=\"/tmp/coverage\" && \\\n    go tool covdata percent -i=/tmp/coverage\n\nFROM scratch AS test-coverage\nCOPY --from=test --link /tmp/coverage /\nCOPY --from=test --link /tmp/report /\n\nFROM base AS license-set\nARG LICENSE_FILES\nRUN --mount=type=bind,target=.,rw \\\n    --mount=from=addlicense,source=/app/addlicense,target=/usr/bin/addlicense \\\n    find . -regex \"${LICENSE_FILES}\" | xargs addlicense -c 'Docker Compose CLI' -l apache && \\\n    mkdir /out && \\\n    find . -regex \"${LICENSE_FILES}\" | cpio -pdm /out\n\nFROM scratch AS license-update\nCOPY --from=license-set /out /\n\nFROM base AS license-validate\nARG LICENSE_FILES\nRUN --mount=type=bind,target=. \\\n    --mount=from=addlicense,source=/app/addlicense,target=/usr/bin/addlicense \\\n    find . -regex \"${LICENSE_FILES}\" | xargs addlicense -check -c 'Docker Compose CLI' -l apache -ignore validate -ignore testdata -ignore resolvepath -v\n\nFROM base AS docsgen\nWORKDIR /src\nRUN --mount=target=. \\\n    --mount=target=/root/.cache,type=cache \\\n    --mount=type=cache,target=/go/pkg/mod \\\n    go build -o /out/docsgen ./docs/yaml/main/generate.go\n\nFROM --platform=${BUILDPLATFORM} alpine AS docs-build\nRUN apk add --no-cache rsync git\nWORKDIR /src\nCOPY --from=docsgen /out/docsgen /usr/bin\nARG DOCS_FORMATS\nRUN --mount=target=/context \\\n    --mount=target=.,type=tmpfs <<EOT\n  set -e\n  rsync -a /context/. .\n  docsgen --formats \"$DOCS_FORMATS\" --source \"docs/reference\"\n  mkdir /out\n  cp -r docs/reference /out\nEOT\n\nFROM scratch AS docs-update\nCOPY --from=docs-build /out /out\n\nFROM docs-build AS docs-validate\nRUN --mount=target=/context \\\n    --mount=target=.,type=tmpfs <<EOT\n  set -e\n  rsync -a /context/. .\n  git add -A\n  rm -rf docs/reference/*\n  cp -rf /out/* ./docs/\n  if [ -n \"$(git status --porcelain -- docs/reference)\" ]; then\n    echo >&2 'ERROR: Docs result differs. 
Please update with \"make docs\"'\n    git status --porcelain -- docs/reference\n    exit 1\n  fi\nEOT\n\nFROM scratch AS binary-unix\nCOPY --link --from=build /out/docker-compose /\nFROM binary-unix AS binary-darwin\nFROM binary-unix AS binary-linux\nFROM scratch AS binary-windows\nCOPY --link --from=build /out/docker-compose /docker-compose.exe\nFROM binary-$TARGETOS AS binary\n# enable scanning for this stage\nARG BUILDKIT_SBOM_SCAN_STAGE=true\n\nFROM --platform=$BUILDPLATFORM alpine AS releaser\nWORKDIR /work\nARG TARGETOS\nARG TARGETARCH\nARG TARGETVARIANT\nRUN --mount=from=binary \\\n    mkdir -p /out && \\\n    # TODO: should just use standard arch\n    TARGETARCH=$([ \"$TARGETARCH\" = \"amd64\" ] && echo \"x86_64\" || echo \"$TARGETARCH\"); \\\n    TARGETARCH=$([ \"$TARGETARCH\" = \"arm64\" ] && echo \"aarch64\" || echo \"$TARGETARCH\"); \\\n    cp docker-compose* \"/out/docker-compose-${TARGETOS}-${TARGETARCH}${TARGETVARIANT}$(ls docker-compose* | sed -e 's/^docker-compose//')\"\n\nFROM scratch AS release\nCOPY --from=releaser /out/ /\n"
  },
  {
    "path": "LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License."
  },
  {
    "path": "Makefile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nPKG := github.com/docker/compose/v5\nVERSION ?= $(shell git describe --match 'v[0-9]*' --dirty='.m' --always --tags)\n\nGO_LDFLAGS ?= -w -X ${PKG}/internal.Version=${VERSION}\nGO_BUILDTAGS ?= e2e\nDRIVE_PREFIX?=\nifeq ($(OS),Windows_NT)\n    DETECTED_OS = Windows\n    DRIVE_PREFIX=C:\nelse\n    DETECTED_OS = $(shell uname -s)\nendif\n\nifeq ($(DETECTED_OS),Windows)\n\tBINARY_EXT=.exe\nendif\n\nBUILD_FLAGS?=\nTEST_FLAGS?=\nE2E_TEST?=\nifneq ($(E2E_TEST),)\n\tTEST_FLAGS:=$(TEST_FLAGS) -run '$(E2E_TEST)'\nendif\n\nEXCLUDE_E2E_TESTS?=\nifneq ($(EXCLUDE_E2E_TESTS),)\n\tTEST_FLAGS:=$(TEST_FLAGS) -skip '$(EXCLUDE_E2E_TESTS)'\nendif\n\nBUILDX_CMD ?= docker buildx\n\n# DESTDIR overrides the output path for binaries and other artifacts\n# this is used by docker/docker-ce-packaging for the apt/rpm builds,\n# so it's important that the resulting binary ends up EXACTLY at the\n# path $DESTDIR/docker-compose when specified.\n#\n# See https://github.com/docker/docker-ce-packaging/blob/e43fbd37e48fde49d907b9195f23b13537521b94/rpm/SPECS/docker-compose-plugin.spec#L47\n#\n# By default, all artifacts go to subdirectories under ./bin/ in the\n# repo root, e.g. 
./bin/build, ./bin/coverage, ./bin/release.\nDESTDIR ?=\n\nall: build\n\n.PHONY: build ## Build the compose cli-plugin\nbuild:\n\tGO111MODULE=on go build $(BUILD_FLAGS) -trimpath -tags \"$(GO_BUILDTAGS)\" -ldflags \"$(GO_LDFLAGS)\" -o \"$(or $(DESTDIR),./bin/build)/docker-compose$(BINARY_EXT)\" ./cmd\n\n.PHONY: binary\nbinary:\n\tBUILD_TAGS=\"$(GO_BUILDTAGS)\" $(BUILDX_CMD) bake binary\n\n.PHONY: binary-with-coverage\nbinary-with-coverage:\n\tBUILD_TAGS=\"$(GO_BUILDTAGS)\" $(BUILDX_CMD) bake binary-with-coverage\n\n.PHONY: install\ninstall: binary\n\tmkdir -p ~/.docker/cli-plugins\n\tinstall $(or $(DESTDIR),./bin/build)/docker-compose ~/.docker/cli-plugins/docker-compose\n\n.PHONY: e2e-compose\ne2e-compose: example-provider ## Run end to end local tests in plugin mode. Set E2E_TEST=TestName to run a single test\n\tgo run gotest.tools/gotestsum@latest --format testname --junitfile \"/tmp/report/report.xml\" -- -v $(TEST_FLAGS) -count=1 ./pkg/e2e\n\n.PHONY: e2e-compose-standalone\ne2e-compose-standalone: ## Run End to end local tests in standalone mode. Set E2E_TEST=TestName to run a single test\n\tgo run gotest.tools/gotestsum@latest --format testname --junitfile \"/tmp/report/report.xml\" -- $(TEST_FLAGS) -v -count=1 -parallel=1 --tags=standalone ./pkg/e2e\n\n.PHONY: build-and-e2e-compose\nbuild-and-e2e-compose: build e2e-compose ## Compile the compose cli-plugin and run end to end local tests in plugin mode. Set E2E_TEST=TestName to run a single test\n\n.PHONY: build-and-e2e-compose-standalone\nbuild-and-e2e-compose-standalone: build e2e-compose-standalone ## Compile the compose cli-plugin and run End to end local tests in standalone mode. 
Set E2E_TEST=TestName to run a single test\n\n.PHONY: example-provider\nexample-provider: ## build example provider for e2e tests\n\tgo build -o bin/build/example-provider docs/examples/provider.go\n\n.PHONY: mocks\nmocks:\n\tmockgen --version >/dev/null 2>&1 || go install go.uber.org/mock/mockgen@v0.4.0\n\tmockgen -destination pkg/mocks/mock_docker_cli.go -package mocks github.com/docker/cli/cli/command Cli\n\tmockgen -destination pkg/mocks/mock_docker_api.go -package mocks github.com/moby/moby/client APIClient\n\tmockgen -destination pkg/mocks/mock_docker_compose_api.go -package mocks -source=./pkg/api/api.go Service\n\n.PHONY: e2e\ne2e: e2e-compose e2e-compose-standalone ## Run end to end local tests in both modes. Set E2E_TEST=TestName to run a single test\n\n.PHONY: build-and-e2e\nbuild-and-e2e: build e2e-compose e2e-compose-standalone ## Compile the compose cli-plugin and run end to end local tests in both modes. Set E2E_TEST=TestName to run a single test\n\n.PHONY: cross\ncross: ## Compile the CLI for linux, darwin and windows\n\t$(BUILDX_CMD) bake binary-cross\n\n.PHONY: test\ntest: ## Run unit tests\n\t$(BUILDX_CMD) bake test\n\n.PHONY: cache-clear\ncache-clear: ## Clear the builder cache\n\t$(BUILDX_CMD) prune --force --filter type=exec.cachemount --filter=unused-for=24h\n\n.PHONY: lint\nlint: ## run linter(s)\n\t$(BUILDX_CMD) bake lint\n\n.PHONY: fmt\nfmt:\n\tgofumpt --version >/dev/null 2>&1 || go install mvdan.cc/gofumpt@latest\n\tgofumpt -w .\n\n.PHONY: docs\ndocs: ## generate documentation\n\t$(eval $@_TMP_OUT := $(shell mktemp -d -t compose-output.XXXXXXXXXX))\n\t$(BUILDX_CMD) bake --set \"*.output=type=local,dest=$($@_TMP_OUT)\" docs-update\n\trm -rf ./docs/internal\n\tcp -R \"$(DRIVE_PREFIX)$($@_TMP_OUT)\"/out/* ./docs/\n\trm -rf \"$(DRIVE_PREFIX)$($@_TMP_OUT)\"/*\n\n.PHONY: validate-docs\nvalidate-docs: ## validate the doc does not change\n\t$(BUILDX_CMD) bake docs-validate\n\n.PHONY: check-dependencies\ncheck-dependencies: ## check dependency 
updates\n\tgo list -u -m -f '{{if not .Indirect}}{{if .Update}}{{.}}{{end}}{{end}}' all\n\n.PHONY: validate-headers\nvalidate-headers: ## Check license header for all files\n\t$(BUILDX_CMD) bake license-validate\n\n.PHONY: go-mod-tidy\ngo-mod-tidy: ## Run go mod tidy in a container and output resulting go.mod and go.sum\n\t$(BUILDX_CMD) bake vendor-update\n\n.PHONY: validate-go-mod\nvalidate-go-mod: ## Validate go.mod and go.sum are up-to-date\n\t$(BUILDX_CMD) bake vendor-validate\n\nvalidate: validate-go-mod validate-headers validate-docs  ## Validate sources\n\npre-commit: validate check-dependencies lint build test e2e-compose\n\nhelp: ## Show help\n\t@echo Please specify a build target. The choices are:\n\t@grep -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = \":.*?## \"}; {printf \"\\033[36m%-30s\\033[0m %s\\n\", $$1, $$2}'\n"
  },
  {
    "path": "NOTICE",
    "content": "Docker Compose V2\nCopyright 2020 Docker Compose authors\n\nThis product includes software developed at Docker, Inc. (https://www.docker.com).\n"
  },
  {
    "path": "README.md",
    "content": "# Table of Contents\n- [Docker Compose](#docker-compose)\n- [Where to get Docker Compose](#where-to-get-docker-compose)\n    + [Windows and macOS](#windows-and-macos)\n    + [Linux](#linux)\n- [Quick Start](#quick-start)\n- [Contributing](#contributing)\n- [Legacy](#legacy)\n\n# Docker Compose\n\n[![GitHub release](https://img.shields.io/github/v/release/docker/compose.svg?style=flat-square)](https://github.com/docker/compose/releases/latest)\n[![PkgGoDev](https://img.shields.io/badge/go.dev-docs-007d9c?style=flat-square&logo=go&logoColor=white)](https://pkg.go.dev/github.com/docker/compose/v5)\n[![Build Status](https://img.shields.io/github/actions/workflow/status/docker/compose/ci.yml?label=ci&logo=github&style=flat-square)](https://github.com/docker/compose/actions?query=workflow%3Aci)\n[![Go Report Card](https://goreportcard.com/badge/github.com/docker/compose/v5?style=flat-square)](https://goreportcard.com/report/github.com/docker/compose/v5)\n[![Codecov](https://codecov.io/gh/docker/compose/branch/main/graph/badge.svg?token=HP3K4Y4ctu)](https://codecov.io/gh/docker/compose)\n[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/docker/compose/badge)](https://api.securityscorecards.dev/projects/github.com/docker/compose)\n![Docker Compose](logo.png?raw=true \"Docker Compose Logo\")\n\nDocker Compose is a tool for running multi-container applications on Docker\ndefined using the [Compose file format](https://compose-spec.io).\nA Compose file is used to define how one or more containers that make up\nyour application are configured.\nOnce you have a Compose file, you can create and start your application with a\nsingle command: `docker compose up`.\n\n> **Note**: About Docker Swarm\n> Docker Swarm used to rely on the legacy compose file format but did not adopt the compose specification\n> so is missing some of the recent enhancements in the compose syntax. 
After \n> [acquisition by Mirantis](https://www.mirantis.com/software/swarm/) swarm isn't maintained by Docker Inc, and\n> as such some Docker Compose features aren't accessible to swarm users.\n\n# Where to get Docker Compose\n\n### Windows and macOS\n\nDocker Compose is included in\n[Docker Desktop](https://www.docker.com/products/docker-desktop/)\nfor Windows and macOS.\n\n### Linux\n\nYou can download Docker Compose binaries from the\n[release page](https://github.com/docker/compose/releases) on this repository.\n\nRename the relevant binary for your OS to `docker-compose` and copy it to `$HOME/.docker/cli-plugins`\n\nOr copy it into one of these folders to install it system-wide:\n\n* `/usr/local/lib/docker/cli-plugins` OR `/usr/local/libexec/docker/cli-plugins`\n* `/usr/lib/docker/cli-plugins` OR `/usr/libexec/docker/cli-plugins`\n\n(might require making the downloaded file executable with `chmod +x`)\n\n\nQuick Start\n-----------\n\nUsing Docker Compose is a three-step process:\n1. Define your app's environment with a `Dockerfile` so it can be\n   reproduced anywhere.\n2. Define the services that make up your app in `compose.yaml` so\n   they can be run together in an isolated environment.\n3. Lastly, run `docker compose up` and Compose will start and run your entire\n   app.\n\nA Compose file looks like this:\n\n```yaml\nservices:\n  web:\n    build: .\n    ports:\n      - \"5000:5000\"\n    volumes:\n      - .:/code\n  redis:\n    image: redis\n```\n\nContributing\n------------\n\nWant to help develop Docker Compose? Check out our\n[contributing documentation](CONTRIBUTING.md).\n\nIf you find an issue, please report it on the\n[issue tracker](https://github.com/docker/compose/issues/new/choose).\n\nLegacy\n-------------\n\nThe Python version of Compose is available under the `v1` [branch](https://github.com/docker/compose/tree/v1).\n"
  },
  {
    "path": "cmd/cmdtrace/cmd_span.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage cmdtrace\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"sort\"\n\t\"strings\"\n\t\"time\"\n\n\tdockercli \"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\tflag \"github.com/spf13/pflag\"\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/codes\"\n\t\"go.opentelemetry.io/otel/trace\"\n\n\tcommands \"github.com/docker/compose/v5/cmd/compose\"\n\t\"github.com/docker/compose/v5/internal/tracing\"\n)\n\n// Setup should be called as part of the command's PersistentPreRunE\n// as soon as possible after initializing the dockerCli.\n//\n// It initializes the tracer for the CLI using both auto-detection\n// from the Docker context metadata as well as standard OTEL_ env\n// vars, creates a root span for the command, and wraps the actual\n// command invocation to ensure the span is properly finalized and\n// exported before exit.\nfunc Setup(cmd *cobra.Command, dockerCli command.Cli, args []string) error {\n\ttracingShutdown, err := tracing.InitTracing(dockerCli)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"initializing tracing: %w\", err)\n\t}\n\n\tctx := cmd.Context()\n\tctx, cmdSpan := otel.Tracer(\"\").Start(\n\t\tctx,\n\t\t\"cli/\"+strings.Join(commandName(cmd), 
\"-\"),\n\t)\n\tcmdSpan.SetAttributes(\n\t\tattribute.StringSlice(\"cli.flags\", getFlags(cmd.Flags())),\n\t\tattribute.Bool(\"cli.isatty\", dockerCli.In().IsTerminal()),\n\t)\n\n\tcmd.SetContext(ctx)\n\twrapRunE(cmd, cmdSpan, tracingShutdown)\n\treturn nil\n}\n\n// wrapRunE injects a wrapper function around the command's actual RunE (or Run)\n// method. This is necessary to capture the command result for reporting as well\n// as flushing any spans before exit.\n//\n// Unfortunately, PersistentPostRun(E) can't be used for this purpose because it\n// only runs if RunE does _not_ return an error, but this should run unconditionally.\nfunc wrapRunE(c *cobra.Command, cmdSpan trace.Span, tracingShutdown tracing.ShutdownFunc) {\n\torigRunE := c.RunE\n\tif origRunE == nil {\n\t\torigRun := c.Run\n\t\t//nolint:unparam // wrapper function for RunE, always returns nil by design\n\t\torigRunE = func(cmd *cobra.Command, args []string) error {\n\t\t\torigRun(cmd, args)\n\t\t\treturn nil\n\t\t}\n\t\tc.Run = nil\n\t}\n\n\tc.RunE = func(cmd *cobra.Command, args []string) error {\n\t\tcmdErr := origRunE(cmd, args)\n\t\tif cmdSpan != nil {\n\t\t\tif cmdErr != nil && !errors.Is(cmdErr, context.Canceled) {\n\t\t\t\t// default exit code is 1 if a more descriptive error\n\t\t\t\t// wasn't returned\n\t\t\t\texitCode := 1\n\t\t\t\tvar statusErr dockercli.StatusError\n\t\t\t\tif errors.As(cmdErr, &statusErr) {\n\t\t\t\t\texitCode = statusErr.StatusCode\n\t\t\t\t}\n\t\t\t\tcmdSpan.SetStatus(codes.Error, \"CLI command returned error\")\n\t\t\t\tcmdSpan.RecordError(cmdErr, trace.WithAttributes(\n\t\t\t\t\tattribute.Int(\"exit_code\", exitCode),\n\t\t\t\t))\n\n\t\t\t} else {\n\t\t\t\tcmdSpan.SetStatus(codes.Ok, \"\")\n\t\t\t}\n\t\t\tcmdSpan.End()\n\t\t}\n\t\tif tracingShutdown != nil {\n\t\t\t// use background for root context because the cmd's context might have\n\t\t\t// been canceled already\n\t\t\tctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)\n\t\t\tdefer 
cancel()\n\t\t\t// TODO(milas): add an env var to enable logging from the\n\t\t\t// OTel components for debugging purposes\n\t\t\t_ = tracingShutdown(ctx)\n\t\t}\n\t\treturn cmdErr\n\t}\n}\n\n// commandName returns the path components for a given command,\n// in reverse alphabetical order for consistent usage metrics.\n//\n// The root Compose command and anything before (i.e. \"docker\")\n// are not included.\n//\n// For example:\n//   - docker compose alpha watch -> [watch, alpha]\n//   - docker-compose up -> [up]\nfunc commandName(cmd *cobra.Command) []string {\n\tvar name []string\n\tfor c := cmd; c != nil; c = c.Parent() {\n\t\tif c.Name() == commands.PluginName {\n\t\t\tbreak\n\t\t}\n\t\tname = append(name, c.Name())\n\t}\n\tsort.Sort(sort.Reverse(sort.StringSlice(name)))\n\treturn name\n}\n\nfunc getFlags(fs *flag.FlagSet) []string {\n\tvar result []string\n\tfs.Visit(func(flag *flag.Flag) {\n\t\tresult = append(result, flag.Name)\n\t})\n\treturn result\n}\n"
  },
  {
    "path": "cmd/cmdtrace/cmd_span_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage cmdtrace\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/spf13/cobra\"\n\tflag \"github.com/spf13/pflag\"\n\n\tcommands \"github.com/docker/compose/v5/cmd/compose\"\n)\n\nfunc TestGetFlags(t *testing.T) {\n\t// Initialize flagSet with flags\n\tfs := flag.NewFlagSet(\"up\", flag.ContinueOnError)\n\tvar (\n\t\tdetach  string\n\t\ttimeout string\n\t)\n\tfs.StringVar(&detach, \"detach\", \"d\", \"\")\n\tfs.StringVar(&timeout, \"timeout\", \"t\", \"\")\n\t_ = fs.Set(\"detach\", \"detach\")\n\t_ = fs.Set(\"timeout\", \"timeout\")\n\n\ttests := []struct {\n\t\tname     string\n\t\tinput    *flag.FlagSet\n\t\texpected []string\n\t}{\n\t\t{\n\t\t\tname:     \"NoFlags\",\n\t\t\tinput:    flag.NewFlagSet(\"NoFlags\", flag.ContinueOnError),\n\t\t\texpected: nil,\n\t\t},\n\t\t{\n\t\t\tname:     \"Flags\",\n\t\t\tinput:    fs,\n\t\t\texpected: []string{\"detach\", \"timeout\"},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tresult := getFlags(test.input)\n\t\t\tif !reflect.DeepEqual(result, test.expected) {\n\t\t\t\tt.Errorf(\"Expected %v, but got %v\", test.expected, result)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestCommandName(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tsetupCmd func() *cobra.Command\n\t\twant     []string\n\t}{\n\t\t{\n\t\t\tname: \"docker compose 
alpha watch -> [watch, alpha]\",\n\t\t\tsetupCmd: func() *cobra.Command {\n\t\t\t\tdockerCmd := &cobra.Command{Use: \"docker\"}\n\t\t\t\tcomposeCmd := &cobra.Command{Use: commands.PluginName}\n\t\t\t\talphaCmd := &cobra.Command{Use: \"alpha\"}\n\t\t\t\twatchCmd := &cobra.Command{Use: \"watch\"}\n\n\t\t\t\tdockerCmd.AddCommand(composeCmd)\n\t\t\t\tcomposeCmd.AddCommand(alphaCmd)\n\t\t\t\talphaCmd.AddCommand(watchCmd)\n\n\t\t\t\treturn watchCmd\n\t\t\t},\n\t\t\twant: []string{\"watch\", \"alpha\"},\n\t\t},\n\t\t{\n\t\t\tname: \"docker-compose up -> [up]\",\n\t\t\tsetupCmd: func() *cobra.Command {\n\t\t\t\tdockerComposeCmd := &cobra.Command{Use: commands.PluginName}\n\t\t\t\tupCmd := &cobra.Command{Use: \"up\"}\n\n\t\t\t\tdockerComposeCmd.AddCommand(upCmd)\n\n\t\t\t\treturn upCmd\n\t\t\t},\n\t\t\twant: []string{\"up\"},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tcmd := tt.setupCmd()\n\t\t\tgot := commandName(cmd)\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"commandName() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cmd/compatibility/convert.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compatibility\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/docker/compose/v5/cmd/compose\"\n)\n\nfunc getCompletionCommands() []string {\n\treturn []string{\n\t\t\"__complete\",\n\t\t\"__completeNoDesc\",\n\t}\n}\n\nfunc getBoolFlags() []string {\n\treturn []string{\n\t\t\"--debug\", \"-D\",\n\t\t\"--verbose\",\n\t\t\"--tls\",\n\t\t\"--tlsverify\",\n\t}\n}\n\nfunc getStringFlags() []string {\n\treturn []string{\n\t\t\"--tlscacert\",\n\t\t\"--tlscert\",\n\t\t\"--tlskey\",\n\t\t\"--host\", \"-H\",\n\t\t\"--context\",\n\t\t\"--log-level\",\n\t}\n}\n\n// Convert transforms standalone docker-compose args into CLI plugin compliant ones\nfunc Convert(args []string) []string {\n\tvar rootFlags []string\n\tcommand := []string{compose.PluginName}\n\tl := len(args)\nARGS:\n\tfor i := 0; i < l; i++ {\n\t\targ := args[i]\n\t\tif slices.Contains(getCompletionCommands(), arg) {\n\t\t\tcommand = append([]string{arg}, command...)\n\t\t\tcontinue\n\t\t}\n\t\tif arg != \"\" && arg[0] != '-' {\n\t\t\tcommand = append(command, args[i:]...)\n\t\t\tbreak\n\t\t}\n\n\t\tswitch arg {\n\t\tcase \"--verbose\":\n\t\t\targ = \"--debug\"\n\t\tcase \"-h\":\n\t\t\t// docker cli has deprecated -h to avoid ambiguity with -H, while docker-compose still support it\n\t\t\targ = \"--help\"\n\t\tcase \"--version\", \"-v\":\n\t\t\t// 
redirect --version pseudo-command to actual command\n\t\t\targ = \"version\"\n\t\t}\n\n\t\tif slices.Contains(getBoolFlags(), arg) {\n\t\t\trootFlags = append(rootFlags, arg)\n\t\t\tcontinue\n\t\t}\n\t\tfor _, flag := range getStringFlags() {\n\t\t\tif arg == flag {\n\t\t\t\ti++\n\t\t\t\tif i >= l {\n\t\t\t\t\tfmt.Fprintf(os.Stderr, \"flag needs an argument: '%s'\\n\", arg)\n\t\t\t\t\tos.Exit(1)\n\t\t\t\t}\n\t\t\t\trootFlags = append(rootFlags, arg, args[i])\n\t\t\t\tcontinue ARGS\n\t\t\t}\n\t\t\tif strings.HasPrefix(arg, flag) {\n\t\t\t\t_, val, found := strings.Cut(arg, \"=\")\n\t\t\t\tif found {\n\t\t\t\t\trootFlags = append(rootFlags, flag, val)\n\t\t\t\t\tcontinue ARGS\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tcommand = append(command, arg)\n\t}\n\treturn append(rootFlags, command...)\n}\n"
  },
  {
    "path": "cmd/compatibility/convert_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compatibility\n\nimport (\n\t\"errors\"\n\t\"os\"\n\t\"os/exec\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc Test_convert(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\targs    []string\n\t\twant    []string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"compose only\",\n\t\t\targs: []string{\"up\"},\n\t\t\twant: []string{\"compose\", \"up\"},\n\t\t},\n\t\t{\n\t\t\tname: \"with context\",\n\t\t\targs: []string{\"--context\", \"foo\", \"-f\", \"compose.yaml\", \"up\"},\n\t\t\twant: []string{\"--context\", \"foo\", \"compose\", \"-f\", \"compose.yaml\", \"up\"},\n\t\t},\n\t\t{\n\t\t\tname: \"with context arg\",\n\t\t\targs: []string{\"--context=foo\", \"-f\", \"compose.yaml\", \"up\"},\n\t\t\twant: []string{\"--context\", \"foo\", \"compose\", \"-f\", \"compose.yaml\", \"up\"},\n\t\t},\n\t\t{\n\t\t\tname: \"with host\",\n\t\t\targs: []string{\"--host\", \"tcp://1.2.3.4\", \"up\"},\n\t\t\twant: []string{\"--host\", \"tcp://1.2.3.4\", \"compose\", \"up\"},\n\t\t},\n\t\t{\n\t\t\tname: \"compose --verbose\",\n\t\t\targs: []string{\"--verbose\"},\n\t\t\twant: []string{\"--debug\", \"compose\"},\n\t\t},\n\t\t{\n\t\t\tname: \"compose --version\",\n\t\t\targs: []string{\"--version\"},\n\t\t\twant: []string{\"compose\", \"version\"},\n\t\t},\n\t\t{\n\t\t\tname: \"compose -v\",\n\t\t\targs: 
[]string{\"-v\"},\n\t\t\twant: []string{\"compose\", \"version\"},\n\t\t},\n\t\t{\n\t\t\tname: \"help\",\n\t\t\targs: []string{\"-h\"},\n\t\t\twant: []string{\"compose\", \"--help\"},\n\t\t},\n\t\t{\n\t\t\tname: \"issues/1962\",\n\t\t\targs: []string{\"psql\", \"-h\", \"postgres\"},\n\t\t\twant: []string{\"compose\", \"psql\", \"-h\", \"postgres\"}, // -h should not be converted to --help\n\t\t},\n\t\t{\n\t\t\tname: \"issues/8648\",\n\t\t\targs: []string{\"exec\", \"mongo\", \"mongo\", \"--host\", \"mongo\"},\n\t\t\twant: []string{\"compose\", \"exec\", \"mongo\", \"mongo\", \"--host\", \"mongo\"}, // --host is passed to exec\n\t\t},\n\t\t{\n\t\t\tname: \"issues/12\",\n\t\t\targs: []string{\"--log-level\", \"INFO\", \"up\"},\n\t\t\twant: []string{\"--log-level\", \"INFO\", \"compose\", \"up\"},\n\t\t},\n\t\t{\n\t\t\tname: \"empty string argument\",\n\t\t\targs: []string{\"--project-directory\", \"\", \"ps\"},\n\t\t\twant: []string{\"compose\", \"--project-directory\", \"\", \"ps\"},\n\t\t},\n\t\t{\n\t\t\tname: \"compose as project name\",\n\t\t\targs: []string{\"--project-name\", \"compose\", \"down\", \"--remove-orphans\"},\n\t\t\twant: []string{\"compose\", \"--project-name\", \"compose\", \"down\", \"--remove-orphans\"},\n\t\t},\n\t\t{\n\t\t\tname: \"completion command\",\n\t\t\targs: []string{\"__complete\", \"up\"},\n\t\t\twant: []string{\"__complete\", \"compose\", \"up\"},\n\t\t},\n\t\t{\n\t\t\tname:    \"string flag without argument\",\n\t\t\targs:    []string{\"--log-level\"},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif tt.wantErr {\n\t\t\t\tif os.Getenv(\"BE_CRASHER\") == \"1\" {\n\t\t\t\t\tConvert(tt.args)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tcmd := exec.Command(os.Args[0], \"-test.run=^\"+t.Name()+\"$\")\n\t\t\t\tcmd.Env = append(os.Environ(), \"BE_CRASHER=1\")\n\t\t\t\terr := cmd.Run()\n\t\t\t\tvar e *exec.ExitError\n\t\t\t\tif errors.As(err, &e) && !e.Success() 
{\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tt.Fatalf(\"process ran with err %v, want exit status 1\", err)\n\t\t\t} else {\n\t\t\t\tgot := Convert(tt.args)\n\t\t\t\tassert.DeepEqual(t, tt.want, got)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cmd/compose/alpha.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n)\n\n// alphaCommand groups all experimental subcommands\nfunc alphaCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\tcmd := &cobra.Command{\n\t\tShort:  \"Experimental commands\",\n\t\tUse:    \"alpha [COMMAND]\",\n\t\tHidden: true,\n\t\tAnnotations: map[string]string{\n\t\t\t\"experimentalCLI\": \"true\",\n\t\t},\n\t}\n\tcmd.AddCommand(\n\t\tvizCommand(p, dockerCli, backendOptions),\n\t\tpublishCommand(p, dockerCli, backendOptions),\n\t\tgenerateCommand(p, dockerCli, backendOptions),\n\t)\n\treturn cmd\n}\n"
  },
  {
    "path": "cmd/compose/attach.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype attachOpts struct {\n\t*composeOptions\n\n\tservice string\n\tindex   int\n\n\tdetachKeys string\n\tnoStdin    bool\n\tproxy      bool\n}\n\nfunc attachCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := attachOpts{\n\t\tcomposeOptions: &composeOptions{\n\t\t\tProjectOptions: p,\n\t\t},\n\t}\n\trunCmd := &cobra.Command{\n\t\tUse:   \"attach [OPTIONS] SERVICE\",\n\t\tShort: \"Attach local standard input, output, and error streams to a service's running container\",\n\t\tArgs:  cobra.MinimumNArgs(1),\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\topts.service = args[0]\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runAttach(ctx, dockerCli, backendOptions, opts)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\trunCmd.Flags().IntVar(&opts.index, \"index\", 0, \"index of the container if service has multiple replicas.\")\n\trunCmd.Flags().StringVarP(&opts.detachKeys, \"detach-keys\", \"\", \"\", \"Override the key sequence for detaching from a 
container.\")\n\n\trunCmd.Flags().BoolVar(&opts.noStdin, \"no-stdin\", false, \"Do not attach STDIN\")\n\trunCmd.Flags().BoolVar(&opts.proxy, \"sig-proxy\", true, \"Proxy all received signals to the process\")\n\treturn runCmd\n}\n\nfunc runAttach(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts attachOpts) error {\n\tprojectName, err := opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tattachOpts := api.AttachOptions{\n\t\tService:    opts.service,\n\t\tIndex:      opts.index,\n\t\tDetachKeys: opts.detachKeys,\n\t\tNoStdin:    opts.noStdin,\n\t\tProxy:      opts.proxy,\n\t}\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Attach(ctx, projectName, attachOpts)\n}\n"
  },
  {
    "path": "cmd/compose/bridge.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/distribution/reference\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/go-units\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/client/pkg/stringid\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/bridge\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\nfunc bridgeCommand(p *ProjectOptions, dockerCli command.Cli) *cobra.Command {\n\tcmd := &cobra.Command{\n\t\tUse:              \"bridge CMD [OPTIONS]\",\n\t\tShort:            \"Convert compose files into another model\",\n\t\tTraverseChildren: true,\n\t}\n\tcmd.AddCommand(\n\t\tconvertCommand(p, dockerCli),\n\t\ttransformersCommand(dockerCli),\n\t)\n\treturn cmd\n}\n\nfunc convertCommand(p *ProjectOptions, dockerCli command.Cli) *cobra.Command {\n\tconvertOpts := bridge.ConvertOptions{}\n\tcmd := &cobra.Command{\n\t\tUse:   \"convert\",\n\t\tShort: \"Convert compose files to Kubernetes manifests, Helm charts, or another model\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runConvert(ctx, dockerCli, p, convertOpts)\n\t\t}),\n\t}\n\tflags := cmd.Flags()\n\tflags.StringVarP(&convertOpts.Output, \"output\", \"o\", \"out\", \"The output directory for the 
Kubernetes resources\")\n\tflags.StringArrayVarP(&convertOpts.Transformations, \"transformation\", \"t\", nil, \"Transformation to apply to compose model (default: docker/compose-bridge-kubernetes)\")\n\tflags.StringVar(&convertOpts.Templates, \"templates\", \"\", \"Directory containing transformation templates\")\n\treturn cmd\n}\n\nfunc runConvert(ctx context.Context, dockerCli command.Cli, p *ProjectOptions, opts bridge.ConvertOptions) error {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := p.ToProject(ctx, dockerCli, backend, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn bridge.Convert(ctx, dockerCli, project, opts)\n}\n\nfunc transformersCommand(dockerCli command.Cli) *cobra.Command {\n\tcmd := &cobra.Command{\n\t\tUse:   \"transformations CMD [OPTIONS]\",\n\t\tShort: \"Manage transformation images\",\n\t}\n\tcmd.AddCommand(\n\t\tlistTransformersCommand(dockerCli),\n\t\tcreateTransformerCommand(dockerCli),\n\t)\n\treturn cmd\n}\n\nfunc listTransformersCommand(dockerCli command.Cli) *cobra.Command {\n\toptions := lsOptions{}\n\tcmd := &cobra.Command{\n\t\tUse:     \"list\",\n\t\tAliases: []string{\"ls\"},\n\t\tShort:   \"List available transformations\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\ttransformers, err := bridge.ListTransformers(ctx, dockerCli)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn displayTransformer(dockerCli, transformers, options)\n\t\t}),\n\t}\n\tcmd.Flags().StringVar(&options.Format, \"format\", \"table\", \"Format the output. 
Values: [table | json]\")\n\tcmd.Flags().BoolVarP(&options.Quiet, \"quiet\", \"q\", false, \"Only display transformer names\")\n\treturn cmd\n}\n\nfunc displayTransformer(dockerCli command.Cli, transformers []image.Summary, options lsOptions) error {\n\tif options.Quiet {\n\t\tfor _, t := range transformers {\n\t\t\tif len(t.RepoTags) > 0 {\n\t\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), t.RepoTags[0])\n\t\t\t} else {\n\t\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), t.ID)\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\treturn formatter.Print(transformers, options.Format, dockerCli.Out(),\n\t\tfunc(w io.Writer) {\n\t\t\tfor _, img := range transformers {\n\t\t\t\tid := stringid.TruncateID(img.ID)\n\t\t\t\tsize := units.HumanSizeWithPrecision(float64(img.Size), 3)\n\t\t\t\trepo, tag := \"<none>\", \"<none>\"\n\t\t\t\tif len(img.RepoTags) > 0 {\n\t\t\t\t\tref, err := reference.ParseDockerRef(img.RepoTags[0])\n\t\t\t\t\tif err == nil {\n\t\t\t\t\t\t// ParseDockerRef will reject a local image ID\n\t\t\t\t\t\trepo = reference.FamiliarName(ref)\n\t\t\t\t\t\tif tagged, ok := ref.(reference.Tagged); ok {\n\t\t\t\t\t\t\ttag = tagged.Tag()\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\t%s\\t%s\\n\", id, repo, tag, size)\n\t\t\t}\n\t\t},\n\t\t\"IMAGE ID\", \"REPO\", \"TAGS\", \"SIZE\")\n}\n\nfunc createTransformerCommand(dockerCli command.Cli) *cobra.Command {\n\tvar opts bridge.CreateTransformerOptions\n\tcmd := &cobra.Command{\n\t\tUse:   \"create [OPTION] PATH\",\n\t\tShort: \"Create a new transformation\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\topts.Dest = args[0]\n\t\t\treturn bridge.CreateTransformer(ctx, dockerCli, opts)\n\t\t}),\n\t}\n\tcmd.Flags().StringVarP(&opts.From, \"from\", \"f\", \"\", \"Existing transformation to copy (default: docker/compose-bridge-kubernetes)\")\n\treturn cmd\n}\n"
  },
  {
    "path": "cmd/compose/build.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\tcliopts \"github.com/docker/cli/opts\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/display\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype buildOptions struct {\n\t*ProjectOptions\n\tquiet      bool\n\tpull       bool\n\tpush       bool\n\targs       []string\n\tnoCache    bool\n\tmemory     cliopts.MemBytes\n\tssh        string\n\tbuilder    string\n\tdeps       bool\n\tprint      bool\n\tcheck      bool\n\tsbom       string\n\tprovenance string\n}\n\nfunc (opts buildOptions) toAPIBuildOptions(services []string) (api.BuildOptions, error) {\n\tvar SSHKeys []types.SSHKey\n\tif opts.ssh != \"\" {\n\t\tid, path, found := strings.Cut(opts.ssh, \"=\")\n\t\tif !found && id != \"default\" {\n\t\t\treturn api.BuildOptions{}, fmt.Errorf(\"invalid ssh key %q\", opts.ssh)\n\t\t}\n\t\tSSHKeys = append(SSHKeys, types.SSHKey{\n\t\t\tID:   id,\n\t\t\tPath: path,\n\t\t})\n\t}\n\tbuilderName := opts.builder\n\tif builderName == \"\" {\n\t\tbuilderName = os.Getenv(\"BUILDX_BUILDER\")\n\t}\n\n\tuiMode := display.Mode\n\tif uiMode == display.ModeJSON 
{\n\t\tuiMode = \"rawjson\"\n\t}\n\n\treturn api.BuildOptions{\n\t\tPull:       opts.pull,\n\t\tPush:       opts.push,\n\t\tProgress:   uiMode,\n\t\tArgs:       types.NewMappingWithEquals(opts.args),\n\t\tNoCache:    opts.noCache,\n\t\tQuiet:      opts.quiet,\n\t\tServices:   services,\n\t\tDeps:       opts.deps,\n\t\tMemory:     int64(opts.memory),\n\t\tPrint:      opts.print,\n\t\tCheck:      opts.check,\n\t\tSSHs:       SSHKeys,\n\t\tBuilder:    builderName,\n\t\tSBOM:       opts.sbom,\n\t\tProvenance: opts.provenance,\n\t}, nil\n}\n\nfunc buildCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := buildOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"build [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Build or rebuild services\",\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tif opts.quiet {\n\t\t\t\tdisplay.Mode = display.ModeQuiet\n\t\t\t\tdevnull, err := os.Open(os.DevNull)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tos.Stdout = devnull\n\t\t\t}\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\tif cmd.Flags().Changed(\"ssh\") && opts.ssh == \"\" {\n\t\t\t\topts.ssh = \"default\"\n\t\t\t}\n\t\t\t// warn whenever --progress was passed to `build`, as it is a global flag\n\t\t\tif cmd.Flags().Changed(\"progress\") {\n\t\t\t\tfmt.Fprint(os.Stderr, \"--progress is a global compose flag; prefer `docker compose --progress <value> build ...` instead\\n\")\n\t\t\t}\n\t\t\treturn runBuild(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.BoolVar(&opts.push, \"push\", false, \"Push service images\")\n\tflags.BoolVarP(&opts.quiet, \"quiet\", \"q\", false, \"Suppress the build output\")\n\tflags.BoolVar(&opts.pull, \"pull\", false, \"Always attempt to pull a newer version of the image\")\n\tflags.StringArrayVar(&opts.args, \"build-arg\", 
[]string{}, \"Set build-time variables for services\")\n\tflags.StringVar(&opts.ssh, \"ssh\", \"\", \"Set SSH authentication used when building service images (use 'default' to use your default SSH agent)\")\n\tflags.StringVar(&opts.builder, \"builder\", \"\", \"Set builder to use\")\n\tflags.BoolVar(&opts.deps, \"with-dependencies\", false, \"Also build dependencies (transitively)\")\n\tflags.StringVar(&opts.provenance, \"provenance\", \"\", `Add a provenance attestation`)\n\tflags.StringVar(&opts.sbom, \"sbom\", \"\", `Add an SBOM attestation`)\n\n\tflags.Bool(\"parallel\", true, \"Build images in parallel. DEPRECATED\")\n\tflags.MarkHidden(\"parallel\") //nolint:errcheck\n\tflags.Bool(\"compress\", true, \"Compress the build context using gzip. DEPRECATED\")\n\tflags.MarkHidden(\"compress\") //nolint:errcheck\n\tflags.Bool(\"force-rm\", true, \"Always remove intermediate containers. DEPRECATED\")\n\tflags.MarkHidden(\"force-rm\") //nolint:errcheck\n\tflags.BoolVar(&opts.noCache, \"no-cache\", false, \"Do not use cache when building the image\")\n\tflags.Bool(\"no-rm\", false, \"Do not remove intermediate containers after a successful build. DEPRECATED\")\n\tflags.MarkHidden(\"no-rm\") //nolint:errcheck\n\tflags.VarP(&opts.memory, \"memory\", \"m\", \"Set memory limit for the build container. 
Not supported by BuildKit.\")\n\tflags.StringVar(&p.Progress, \"progress\", \"\", fmt.Sprintf(`Set type of ui output (%s)`, strings.Join(printerModes, \", \")))\n\tflags.MarkHidden(\"progress\") //nolint:errcheck\n\tflags.BoolVar(&opts.print, \"print\", false, \"Print equivalent bake file\")\n\tflags.BoolVar(&opts.check, \"check\", false, \"Check build configuration\")\n\n\treturn cmd\n}\n\nfunc runBuild(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts buildOptions, services []string) error {\n\tif opts.print {\n\t\tbackendOptions.Add(compose.WithEventProcessor(display.Quiet()))\n\t}\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\topts.All = true // do not drop resources as build may involve some dependencies by additional_contexts\n\tproject, _, err := opts.ToProject(ctx, dockerCli, backend, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := applyPlatforms(project, false); err != nil {\n\t\treturn err\n\t}\n\n\tapiBuildOptions, err := opts.toAPIBuildOptions(services)\n\tif err != nil {\n\t\treturn err\n\t}\n\tapiBuildOptions.Attestations = true\n\n\treturn backend.Build(ctx, project, apiBuildOptions)\n}\n"
  },
  {
    "path": "cmd/compose/commit.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/opts\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype commitOptions struct {\n\t*ProjectOptions\n\n\tservice   string\n\treference string\n\n\tpause   bool\n\tcomment string\n\tauthor  string\n\tchanges opts.ListOpts\n\n\tindex int\n}\n\nfunc commitCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\toptions := commitOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"commit [OPTIONS] SERVICE [REPOSITORY[:TAG]]\",\n\t\tShort: \"Create a new image from a service container's changes\",\n\t\tArgs:  cobra.RangeArgs(1, 2),\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\toptions.service = args[0]\n\t\t\tif len(args) > 1 {\n\t\t\t\toptions.reference = args[1]\n\t\t\t}\n\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runCommit(ctx, dockerCli, backendOptions, options)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\tflags := cmd.Flags()\n\tflags.IntVar(&options.index, \"index\", 0, \"index of the container if service has multiple 
replicas.\")\n\n\tflags.BoolVarP(&options.pause, \"pause\", \"p\", true, \"Pause container during commit\")\n\tflags.StringVarP(&options.comment, \"message\", \"m\", \"\", \"Commit message\")\n\tflags.StringVarP(&options.author, \"author\", \"a\", \"\", `Author (e.g., \"John Hannibal Smith <hannibal@a-team.com>\")`)\n\toptions.changes = opts.NewListOpts(nil)\n\tflags.VarP(&options.changes, \"change\", \"c\", \"Apply Dockerfile instruction to the created image\")\n\n\treturn cmd\n}\n\nfunc runCommit(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, options commitOptions) error {\n\tprojectName, err := options.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Commit(ctx, projectName, api.CommitOptions{\n\t\tService:   options.service,\n\t\tReference: options.reference,\n\t\tPause:     options.pause,\n\t\tComment:   options.comment,\n\t\tAuthor:    options.author,\n\t\tChanges:   options.changes,\n\t\tIndex:     options.index,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/completion.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\n// validArgsFn defines a completion func to be returned to fetch completion options\ntype validArgsFn func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective)\n\nfunc noCompletion() validArgsFn {\n\treturn func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\treturn []string{}, cobra.ShellCompDirectiveNoSpace\n\t}\n}\n\nfunc completeServiceNames(dockerCli command.Cli, p *ProjectOptions) validArgsFn {\n\treturn func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\tp.Offline = true\n\t\tbackend, err := compose.NewComposeService(dockerCli)\n\t\tif err != nil {\n\t\t\treturn nil, cobra.ShellCompDirectiveNoFileComp\n\t\t}\n\n\t\tproject, _, err := p.ToProject(cmd.Context(), dockerCli, backend, nil)\n\t\tif err != nil {\n\t\t\treturn nil, cobra.ShellCompDirectiveNoFileComp\n\t\t}\n\t\tvar values []string\n\t\tserviceNames := append(project.ServiceNames(), project.DisabledServiceNames()...)\n\t\tfor _, s := range serviceNames {\n\t\t\tif toComplete == \"\" || strings.HasPrefix(s, 
toComplete) {\n\t\t\t\tvalues = append(values, s)\n\t\t\t}\n\t\t}\n\t\treturn values, cobra.ShellCompDirectiveNoFileComp\n\t}\n}\n\nfunc completeProjectNames(dockerCli command.Cli, backendOptions *BackendOptions) func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\treturn func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\t\tif err != nil {\n\t\t\treturn nil, cobra.ShellCompDirectiveError\n\t\t}\n\n\t\tlist, err := backend.List(cmd.Context(), api.ListOptions{\n\t\t\tAll: true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, cobra.ShellCompDirectiveError\n\t\t}\n\t\tvar values []string\n\t\tfor _, stack := range list {\n\t\t\tif strings.HasPrefix(stack.Name, toComplete) {\n\t\t\t\tvalues = append(values, stack.Name)\n\t\t\t}\n\t\t}\n\t\treturn values, cobra.ShellCompDirectiveNoFileComp\n\t}\n}\n\nfunc completeProfileNames(dockerCli command.Cli, p *ProjectOptions) validArgsFn {\n\treturn func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\tp.Offline = true\n\t\tbackend, err := compose.NewComposeService(dockerCli)\n\t\tif err != nil {\n\t\t\treturn nil, cobra.ShellCompDirectiveNoFileComp\n\t\t}\n\n\t\tproject, _, err := p.ToProject(cmd.Context(), dockerCli, backend, nil)\n\t\tif err != nil {\n\t\t\treturn nil, cobra.ShellCompDirectiveNoFileComp\n\t\t}\n\n\t\tallProfileNames := project.AllServices().GetProfiles()\n\t\tsort.Strings(allProfileNames)\n\n\t\tvar values []string\n\t\tfor _, profileName := range allProfileNames {\n\t\t\tif strings.HasPrefix(profileName, toComplete) {\n\t\t\t\tvalues = append(values, profileName)\n\t\t\t}\n\t\t}\n\t\treturn values, cobra.ShellCompDirectiveNoFileComp\n\t}\n}\n\nfunc completeScaleArgs(cli command.Cli, p *ProjectOptions) cobra.CompletionFunc {\n\treturn func(cmd *cobra.Command, args []string, 
toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\tcompletions, directive := completeServiceNames(cli, p)(cmd, args, toComplete)\n\t\tfor i, completion := range completions {\n\t\t\tcompletions[i] = completion + \"=\"\n\t\t}\n\t\treturn completions, directive\n\t}\n}\n"
  },
  {
    "path": "cmd/compose/compose.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"os/signal\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"syscall\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/dotenv\"\n\t\"github.com/compose-spec/compose-go/v2/loader\"\n\tcomposepaths \"github.com/compose-spec/compose-go/v2/paths\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\tcomposegoutils \"github.com/compose-spec/compose-go/v2/utils\"\n\tdockercli \"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli-plugins/metadata\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/pkg/kvfile\"\n\t\"github.com/morikuni/aec\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/cmd/display\"\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n\t\"github.com/docker/compose/v5/pkg/remote\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nconst (\n\t// ComposeParallelLimit set the limit running concurrent operation on docker engine\n\tComposeParallelLimit = \"COMPOSE_PARALLEL_LIMIT\"\n\t// ComposeProjectName define the 
project name to be used, instead of guessing from parent directory\n\tComposeProjectName = \"COMPOSE_PROJECT_NAME\"\n\t// ComposeCompatibility tries to mimic compose v1 as much as possible\n\tComposeCompatibility = api.ComposeCompatibility\n\t// ComposeRemoveOrphans removes \"orphaned\" containers, i.e. containers tagged for the current project but not declared as a service\n\tComposeRemoveOrphans = \"COMPOSE_REMOVE_ORPHANS\"\n\t// ComposeIgnoreOrphans ignores \"orphaned\" containers\n\tComposeIgnoreOrphans = \"COMPOSE_IGNORE_ORPHANS\"\n\t// ComposeEnvFiles defines the env files to use if --env-file isn't used\n\tComposeEnvFiles = \"COMPOSE_ENV_FILES\"\n\t// ComposeMenu defines if the navigation menu should be rendered. Can also be set via --menu\n\tComposeMenu = \"COMPOSE_MENU\"\n\t// ComposeProgress defines the type of progress output, if --progress isn't used\n\tComposeProgress = \"COMPOSE_PROGRESS\"\n)\n\n// rawEnv loads a dot env file using the docker/cli key=value parser, without attempting to interpolate or evaluate values\nfunc rawEnv(r io.Reader, filename string, vars map[string]string, lookup func(key string) (string, bool)) error {\n\tlines, err := kvfile.ParseFromReader(r, lookup)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to parse env_file %s: %w\", filename, err)\n\t}\n\tfor _, line := range lines {\n\t\tkey, value, _ := strings.Cut(line, \"=\")\n\t\tvars[key] = value\n\t}\n\treturn nil\n}\n\nvar stdioToStdout bool\n\nfunc init() {\n\t// compose evaluates env file values for interpolation\n\t// the `raw` format allows loading an env_file with the same parser used by docker run --env-file\n\tdotenv.RegisterFormat(\"raw\", rawEnv)\n\n\tif v, ok := os.LookupEnv(\"COMPOSE_STATUS_STDOUT\"); ok {\n\t\tstdioToStdout, _ = strconv.ParseBool(v)\n\t}\n}\n\n// Command defines a compose CLI command as a func with args\ntype Command func(context.Context, []string) error\n\n// CobraCommand defines a cobra command function\ntype CobraCommand func(context.Context, *cobra.Command, 
[]string) error\n\n// AdaptCmd adapts a CobraCommand func to the cobra library\nfunc AdaptCmd(fn CobraCommand) func(cmd *cobra.Command, args []string) error {\n\treturn func(cmd *cobra.Command, args []string) error {\n\t\tctx, cancel := context.WithCancel(cmd.Context())\n\n\t\ts := make(chan os.Signal, 1)\n\t\tsignal.Notify(s, syscall.SIGTERM, syscall.SIGINT)\n\t\tgo func() {\n\t\t\t<-s\n\t\t\tcancel()\n\t\t\tsignal.Stop(s)\n\t\t\tclose(s)\n\t\t}()\n\n\t\terr := fn(ctx, cmd, args)\n\t\tif api.IsErrCanceled(err) || errors.Is(ctx.Err(), context.Canceled) {\n\t\t\terr = dockercli.StatusError{\n\t\t\t\tStatusCode: 130,\n\t\t\t}\n\t\t}\n\t\tif display.Mode == display.ModeJSON {\n\t\t\terr = makeJSONError(err)\n\t\t}\n\t\treturn err\n\t}\n}\n\n// Adapt adapts a Command func to the cobra library\nfunc Adapt(fn Command) func(cmd *cobra.Command, args []string) error {\n\treturn AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\treturn fn(ctx, args)\n\t})\n}\n\ntype ProjectOptions struct {\n\tProjectName        string\n\tProfiles           []string\n\tConfigPaths        []string\n\tWorkDir            string\n\tProjectDir         string\n\tEnvFiles           []string\n\tCompatibility      bool\n\tProgress           string\n\tOffline            bool\n\tAll                bool\n\tinsecureRegistries []string\n}\n\n// ProjectFunc is a command body that operates on a loaded types.Project\ntype ProjectFunc func(ctx context.Context, project *types.Project) error\n\n// ProjectServicesFunc is a command body that operates on a loaded types.Project and a selection of services\ntype ProjectServicesFunc func(ctx context.Context, project *types.Project, services []string) error\n\n// WithProject creates a cobra run command from a ProjectFunc based on the configured project options\nfunc (o *ProjectOptions) WithProject(fn ProjectFunc, dockerCli command.Cli) func(cmd *cobra.Command, args []string) error {\n\treturn o.WithServices(dockerCli, func(ctx context.Context, project *types.Project, services 
[]string) error {\n\t\treturn fn(ctx, project)\n\t})\n}\n\n// WithServices creates a cobra run command from a ProjectServicesFunc based on the configured project options and selected services\nfunc (o *ProjectOptions) WithServices(dockerCli command.Cli, fn ProjectServicesFunc) func(cmd *cobra.Command, args []string) error {\n\treturn Adapt(func(ctx context.Context, services []string) error {\n\t\tbackend, err := compose.NewComposeService(dockerCli)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tproject, metrics, err := o.ToProject(ctx, dockerCli, backend, services, cli.WithoutEnvironmentResolution)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tctx = context.WithValue(ctx, tracing.MetricsKey{}, metrics)\n\n\t\tproject, err = project.WithServicesEnvironmentResolved(true)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn fn(ctx, project, services)\n\t})\n}\n\ntype jsonErrorData struct {\n\tError   bool   `json:\"error,omitempty\"`\n\tMessage string `json:\"message,omitempty\"`\n}\n\nfunc errorAsJSON(message string) string {\n\terrorMessage := &jsonErrorData{\n\t\tError:   true,\n\t\tMessage: message,\n\t}\n\tmarshal, err := json.Marshal(errorMessage)\n\tif err == nil {\n\t\treturn string(marshal)\n\t} else {\n\t\treturn message\n\t}\n}\n\nfunc makeJSONError(err error) error {\n\tif err == nil {\n\t\treturn nil\n\t}\n\tvar statusErr dockercli.StatusError\n\tif errors.As(err, &statusErr) {\n\t\treturn dockercli.StatusError{\n\t\t\tStatusCode: statusErr.StatusCode,\n\t\t\tStatus:     errorAsJSON(statusErr.Status),\n\t\t}\n\t}\n\treturn fmt.Errorf(\"%s\", errorAsJSON(err.Error()))\n}\n\nfunc (o *ProjectOptions) addProjectFlags(f *pflag.FlagSet) {\n\tf.StringArrayVar(&o.Profiles, \"profile\", []string{}, \"Specify a profile to enable\")\n\tf.StringVarP(&o.ProjectName, \"project-name\", \"p\", \"\", \"Project name\")\n\tf.StringArrayVarP(&o.ConfigPaths, \"file\", \"f\", []string{}, \"Compose configuration files\")\n\tf.StringArrayVar(&o.insecureRegistries, 
\"insecure-registry\", []string{}, \"Use insecure registry to pull Compose OCI artifacts. Doesn't apply to images\")\n\t_ = f.MarkHidden(\"insecure-registry\")\n\tf.StringArrayVar(&o.EnvFiles, \"env-file\", defaultStringArrayVar(ComposeEnvFiles), \"Specify an alternate environment file\")\n\tf.StringVar(&o.ProjectDir, \"project-directory\", \"\", \"Specify an alternate working directory\\n(default: the path of the first specified Compose file)\")\n\tf.StringVar(&o.WorkDir, \"workdir\", \"\", \"DEPRECATED! USE --project-directory INSTEAD.\\nSpecify an alternate working directory\\n(default: the path of the first specified Compose file)\")\n\tf.BoolVar(&o.Compatibility, \"compatibility\", false, \"Run compose in backward compatibility mode\")\n\tf.StringVar(&o.Progress, \"progress\", os.Getenv(ComposeProgress), fmt.Sprintf(`Set type of progress output (%s)`, strings.Join(printerModes, \", \")))\n\tf.BoolVar(&o.All, \"all-resources\", false, \"Include all resources, even those not used by services\")\n\t_ = f.MarkHidden(\"workdir\")\n}\n\n// get the default value for a command line flag that is set by a comma-separated value in an environment variable\nfunc defaultStringArrayVar(env string) []string {\n\treturn strings.FieldsFunc(os.Getenv(env), func(c rune) bool {\n\t\treturn c == ','\n\t})\n}\n\nfunc (o *ProjectOptions) projectOrName(ctx context.Context, dockerCli command.Cli, services ...string) (*types.Project, string, error) {\n\tname := o.ProjectName\n\tvar project *types.Project\n\tif len(o.ConfigPaths) > 0 || o.ProjectName == \"\" {\n\t\tbackend, err := compose.NewComposeService(dockerCli)\n\t\tif err != nil {\n\t\t\treturn nil, \"\", err\n\t\t}\n\n\t\tp, _, err := o.ToProject(ctx, dockerCli, backend, services, cli.WithDiscardEnvFile, cli.WithoutEnvironmentResolution)\n\t\tif err != nil {\n\t\t\tenvProjectName := os.Getenv(ComposeProjectName)\n\t\t\tif envProjectName != \"\" {\n\t\t\t\treturn nil, envProjectName, nil\n\t\t\t}\n\t\t\treturn nil, \"\", 
err\n\t\t}\n\t\tproject = p\n\t\tname = p.Name\n\t}\n\treturn project, name, nil\n}\n\nfunc (o *ProjectOptions) toProjectName(ctx context.Context, dockerCli command.Cli) (string, error) {\n\tif o.ProjectName != \"\" {\n\t\treturn o.ProjectName, nil\n\t}\n\n\tenvProjectName := os.Getenv(ComposeProjectName)\n\tif envProjectName != \"\" {\n\t\treturn envProjectName, nil\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tproject, _, err := o.ToProject(ctx, dockerCli, backend, nil)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn project.Name, nil\n}\n\nfunc (o *ProjectOptions) ToModel(ctx context.Context, dockerCli command.Cli, services []string, po ...cli.ProjectOptionsFn) (map[string]any, error) {\n\tremotes := o.remoteLoaders(dockerCli)\n\tfor _, r := range remotes {\n\t\tpo = append(po, cli.WithResourceLoader(r))\n\t}\n\n\toptions, err := o.toProjectOptions(po...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif o.Compatibility || utils.StringToBool(options.Environment[ComposeCompatibility]) {\n\t\tapi.Separator = \"_\"\n\t}\n\n\treturn options.LoadModel(ctx)\n}\n\n// ToProject loads a Compose project using the LoadProject API.\n// Accepts optional cli.ProjectOptionsFn to control loader behavior.\nfunc (o *ProjectOptions) ToProject(ctx context.Context, dockerCli command.Cli, backend api.Compose, services []string, po ...cli.ProjectOptionsFn) (*types.Project, tracing.Metrics, error) {\n\tvar metrics tracing.Metrics\n\tremotes := o.remoteLoaders(dockerCli)\n\n\t// Setup metrics listener to collect project data\n\tmetricsListener := func(event string, metadata map[string]any) {\n\t\tswitch event {\n\t\tcase \"extends\":\n\t\t\tmetrics.CountExtends++\n\t\tcase \"include\":\n\t\t\tpaths := metadata[\"path\"].(types.StringList)\n\t\t\tfor _, path := range paths {\n\t\t\t\tvar isRemote bool\n\t\t\t\tfor _, r := range remotes {\n\t\t\t\t\tif r.Accept(path) {\n\t\t\t\t\t\tisRemote = 
true\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif isRemote {\n\t\t\t\t\tmetrics.CountIncludesRemote++\n\t\t\t\t} else {\n\t\t\t\t\tmetrics.CountIncludesLocal++\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tloadOpts := api.ProjectLoadOptions{\n\t\tProjectName:       o.ProjectName,\n\t\tConfigPaths:       o.ConfigPaths,\n\t\tWorkingDir:        o.ProjectDir,\n\t\tEnvFiles:          o.EnvFiles,\n\t\tProfiles:          o.Profiles,\n\t\tServices:          services,\n\t\tOffline:           o.Offline,\n\t\tAll:               o.All,\n\t\tCompatibility:     o.Compatibility,\n\t\tProjectOptionsFns: po,\n\t\tLoadListeners:     []api.LoadListener{metricsListener},\n\t\tOCI: api.OCIOptions{\n\t\t\tInsecureRegistries: o.insecureRegistries,\n\t\t},\n\t}\n\n\tproject, err := backend.LoadProject(ctx, loadOpts)\n\tif err != nil {\n\t\treturn nil, metrics, err\n\t}\n\n\treturn project, metrics, nil\n}\n\nfunc (o *ProjectOptions) remoteLoaders(dockerCli command.Cli) []loader.ResourceLoader {\n\tif o.Offline {\n\t\treturn nil\n\t}\n\tgit := remote.NewGitRemoteLoader(dockerCli, o.Offline)\n\toci := remote.NewOCIRemoteLoader(dockerCli, o.Offline, api.OCIOptions{})\n\treturn []loader.ResourceLoader{git, oci}\n}\n\nfunc (o *ProjectOptions) toProjectOptions(po ...cli.ProjectOptionsFn) (*cli.ProjectOptions, error) {\n\topts := []cli.ProjectOptionsFn{\n\t\tcli.WithWorkingDirectory(o.ProjectDir),\n\t\t// First apply os.Environment, always win\n\t\tcli.WithOsEnv,\n\t}\n\n\tif _, present := os.LookupEnv(\"PWD\"); !present {\n\t\tif pwd, err := os.Getwd(); err != nil {\n\t\t\treturn nil, err\n\t\t} else {\n\t\t\topts = append(opts, cli.WithEnv([]string{\"PWD=\" + pwd}))\n\t\t}\n\t}\n\n\topts = append(opts,\n\t\t// Load PWD/.env if present and no explicit --env-file has been set\n\t\tcli.WithEnvFiles(o.EnvFiles...),\n\t\t// read dot env file to populate project environment\n\t\tcli.WithDotEnv,\n\t\t// get compose file path set by COMPOSE_FILE\n\t\tcli.WithConfigFileEnv,\n\t\t// if none was 
selected, get default compose.yaml file from current dir or parent folder\n\t\tcli.WithDefaultConfigPath,\n\t\t// ... and then, a project directory != PWD may have been set, so load its .env file\n\t\tcli.WithEnvFiles(o.EnvFiles...), //nolint:gocritic // intentionally applying cli.WithEnvFiles twice.\n\t\tcli.WithDotEnv,                  //nolint:gocritic // intentionally applying cli.WithDotEnv twice.\n\t\t// by now, COMPOSE_PROFILES may have been set\n\t\tcli.WithDefaultProfiles(o.Profiles...),\n\t\tcli.WithName(o.ProjectName),\n\t)\n\n\treturn cli.NewProjectOptions(o.ConfigPaths, append(po, opts...)...)\n}\n\n// PluginName is the name of the plugin\nconst PluginName = \"compose\"\n\n// RunningAsStandalone detects when running as a standalone program\nfunc RunningAsStandalone() bool {\n\treturn len(os.Args) < 2 || os.Args[1] != metadata.MetadataSubcommandName && os.Args[1] != PluginName\n}\n\ntype BackendOptions struct {\n\tOptions []compose.Option\n}\n\nfunc (o *BackendOptions) Add(option compose.Option) {\n\to.Options = append(o.Options, option)\n}\n\n// RootCommand returns the compose command with its child commands\nfunc RootCommand(dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command { //nolint:gocyclo\n\topts := ProjectOptions{}\n\tvar (\n\t\tansi     string\n\t\tnoAnsi   bool\n\t\tverbose  bool\n\t\tversion  bool\n\t\tparallel int\n\t\tdryRun   bool\n\t)\n\tc := &cobra.Command{\n\t\tShort:            \"Docker Compose\",\n\t\tLong:             \"Define and run multi-container applications with Docker\",\n\t\tUse:              PluginName,\n\t\tTraverseChildren: true,\n\t\t// By default (no Run/RunE in parent c), cobra displays the help of parent c for subcommand typos but exits 0\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn cmd.Help()\n\t\t\t}\n\t\t\tif version {\n\t\t\t\treturn versionCommand(dockerCli).Execute()\n\t\t\t}\n\t\t\t_ = cmd.Help()\n\t\t\treturn 
dockercli.StatusError{\n\t\t\t\tStatusCode: 1,\n\t\t\t\tStatus:     fmt.Sprintf(\"unknown docker command: %q\", \"compose \"+args[0]),\n\t\t\t}\n\t\t},\n\t\tPersistentPreRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tparent := cmd.Root()\n\t\t\tif parent != nil {\n\t\t\t\tparentPrerun := parent.PersistentPreRunE\n\t\t\t\tif parentPrerun != nil {\n\t\t\t\t\terr := parentPrerun(cmd, args)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif verbose {\n\t\t\t\tlogrus.SetLevel(logrus.TraceLevel)\n\t\t\t}\n\n\t\t\terr := setEnvWithDotEnv(opts, dockerCli)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif noAnsi {\n\t\t\t\tif ansi != \"auto\" {\n\t\t\t\t\treturn errors.New(`cannot specify DEPRECATED \"--no-ansi\" and \"--ansi\". Please use only \"--ansi\"`)\n\t\t\t\t}\n\t\t\t\tansi = \"never\"\n\t\t\t\tfmt.Fprint(os.Stderr, \"option '--no-ansi' is DEPRECATED ! Please use '--ansi' instead.\\n\")\n\t\t\t}\n\t\t\tif v, ok := os.LookupEnv(\"COMPOSE_ANSI\"); ok && !cmd.Flags().Changed(\"ansi\") {\n\t\t\t\tansi = v\n\t\t\t}\n\t\t\tformatter.SetANSIMode(dockerCli, ansi)\n\n\t\t\tif noColor, ok := os.LookupEnv(\"NO_COLOR\"); ok && noColor != \"\" {\n\t\t\t\tdisplay.NoColor()\n\t\t\t\tformatter.SetANSIMode(dockerCli, formatter.Never)\n\t\t\t}\n\n\t\t\tswitch ansi {\n\t\t\tcase \"never\":\n\t\t\t\tdisplay.Mode = display.ModePlain\n\t\t\tcase \"always\":\n\t\t\t\tdisplay.Mode = display.ModeTTY\n\t\t\t}\n\n\t\t\tdetached, _ := cmd.Flags().GetBool(\"detach\")\n\t\t\tvar ep api.EventProcessor\n\t\t\tswitch opts.Progress {\n\t\t\tcase \"\", display.ModeAuto:\n\t\t\t\tswitch {\n\t\t\t\tcase ansi == \"never\":\n\t\t\t\t\tdisplay.Mode = display.ModePlain\n\t\t\t\t\tep = display.Plain(dockerCli.Err())\n\t\t\t\tcase dockerCli.Out().IsTerminal():\n\t\t\t\t\tep = display.Full(dockerCli.Err(), stdinfo(dockerCli), detached)\n\t\t\t\tdefault:\n\t\t\t\t\tep = display.Plain(dockerCli.Err())\n\t\t\t\t}\n\t\t\tcase 
display.ModeTTY:\n\t\t\t\tif ansi == \"never\" {\n\t\t\t\t\treturn fmt.Errorf(\"can't use --progress tty while ANSI support is disabled\")\n\t\t\t\t}\n\t\t\t\tdisplay.Mode = display.ModeTTY\n\t\t\t\tep = display.Full(dockerCli.Err(), stdinfo(dockerCli), detached)\n\n\t\t\tcase display.ModePlain:\n\t\t\t\tif ansi == \"always\" {\n\t\t\t\t\treturn fmt.Errorf(\"can't use --progress plain while ANSI support is forced\")\n\t\t\t\t}\n\t\t\t\tdisplay.Mode = display.ModePlain\n\t\t\t\tep = display.Plain(dockerCli.Err())\n\t\t\tcase display.ModeQuiet, \"none\":\n\t\t\t\tdisplay.Mode = display.ModeQuiet\n\t\t\t\tep = display.Quiet()\n\t\t\tcase display.ModeJSON:\n\t\t\t\tdisplay.Mode = display.ModeJSON\n\t\t\t\tlogrus.SetFormatter(&logrus.JSONFormatter{})\n\t\t\t\tep = display.JSON(dockerCli.Err())\n\t\t\tdefault:\n\t\t\t\treturn fmt.Errorf(\"unsupported --progress value %q\", opts.Progress)\n\t\t\t}\n\t\t\tbackendOptions.Add(compose.WithEventProcessor(ep))\n\n\t\t\t// (4) options validation / normalization\n\t\t\tif opts.WorkDir != \"\" {\n\t\t\t\tif opts.ProjectDir != \"\" {\n\t\t\t\t\treturn errors.New(`cannot specify DEPRECATED \"--workdir\" and \"--project-directory\". Please use only \"--project-directory\" instead`)\n\t\t\t\t}\n\t\t\t\topts.ProjectDir = opts.WorkDir\n\t\t\t\tfmt.Fprint(os.Stderr, aec.Apply(\"option '--workdir' is DEPRECATED at root level! 
Please use '--project-directory' instead.\\n\", aec.RedF))\n\t\t\t}\n\t\t\tfor i, file := range opts.EnvFiles {\n\t\t\t\tfile = composepaths.ExpandUser(file)\n\t\t\t\tif !filepath.IsAbs(file) {\n\t\t\t\t\tabs, err := filepath.Abs(file)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tfile = abs\n\t\t\t\t}\n\t\t\t\topts.EnvFiles[i] = file\n\t\t\t}\n\n\t\t\tcomposeCmd := cmd\n\t\t\tfor composeCmd.Name() != PluginName {\n\t\t\t\tif !composeCmd.HasParent() {\n\t\t\t\t\treturn fmt.Errorf(\"error parsing command line, expected %q\", PluginName)\n\t\t\t\t}\n\t\t\t\tcomposeCmd = composeCmd.Parent()\n\t\t\t}\n\n\t\t\tif v, ok := os.LookupEnv(ComposeParallelLimit); ok && !composeCmd.Flags().Changed(\"parallel\") {\n\t\t\t\ti, err := strconv.Atoi(v)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"%s must be an integer (found: %q)\", ComposeParallelLimit, v)\n\t\t\t\t}\n\t\t\t\tparallel = i\n\t\t\t}\n\t\t\tif parallel > 0 {\n\t\t\t\tlogrus.Debugf(\"Limiting max concurrency to %d jobs\", parallel)\n\t\t\t\tbackendOptions.Add(compose.WithMaxConcurrency(parallel))\n\t\t\t}\n\n\t\t\t// dry run detection\n\t\t\tif dryRun {\n\t\t\t\tbackendOptions.Add(compose.WithDryRun)\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tc.AddCommand(\n\t\tupCommand(&opts, dockerCli, backendOptions),\n\t\tdownCommand(&opts, dockerCli, backendOptions),\n\t\tstartCommand(&opts, dockerCli, backendOptions),\n\t\trestartCommand(&opts, dockerCli, backendOptions),\n\t\tstopCommand(&opts, dockerCli, backendOptions),\n\t\tpsCommand(&opts, dockerCli, backendOptions),\n\t\tlistCommand(dockerCli, backendOptions),\n\t\tlogsCommand(&opts, dockerCli, backendOptions),\n\t\tconfigCommand(&opts, dockerCli),\n\t\tkillCommand(&opts, dockerCli, backendOptions),\n\t\trunCommand(&opts, dockerCli, backendOptions),\n\t\tremoveCommand(&opts, dockerCli, backendOptions),\n\t\texecCommand(&opts, dockerCli, backendOptions),\n\t\tattachCommand(&opts, dockerCli, 
backendOptions),\n\t\texportCommand(&opts, dockerCli, backendOptions),\n\t\tcommitCommand(&opts, dockerCli, backendOptions),\n\t\tpauseCommand(&opts, dockerCli, backendOptions),\n\t\tunpauseCommand(&opts, dockerCli, backendOptions),\n\t\ttopCommand(&opts, dockerCli, backendOptions),\n\t\teventsCommand(&opts, dockerCli, backendOptions),\n\t\tportCommand(&opts, dockerCli, backendOptions),\n\t\timagesCommand(&opts, dockerCli, backendOptions),\n\t\tversionCommand(dockerCli),\n\t\tbuildCommand(&opts, dockerCli, backendOptions),\n\t\tpushCommand(&opts, dockerCli, backendOptions),\n\t\tpullCommand(&opts, dockerCli, backendOptions),\n\t\tcreateCommand(&opts, dockerCli, backendOptions),\n\t\tcopyCommand(&opts, dockerCli, backendOptions),\n\t\twaitCommand(&opts, dockerCli, backendOptions),\n\t\tscaleCommand(&opts, dockerCli, backendOptions),\n\t\tstatsCommand(&opts, dockerCli),\n\t\twatchCommand(&opts, dockerCli, backendOptions),\n\t\tpublishCommand(&opts, dockerCli, backendOptions),\n\t\talphaCommand(&opts, dockerCli, backendOptions),\n\t\tbridgeCommand(&opts, dockerCli),\n\t\tvolumesCommand(&opts, dockerCli, backendOptions),\n\t)\n\n\tc.Flags().SetInterspersed(false)\n\topts.addProjectFlags(c.Flags())\n\tc.RegisterFlagCompletionFunc( //nolint:errcheck\n\t\t\"project-name\",\n\t\tcompleteProjectNames(dockerCli, backendOptions),\n\t)\n\tc.RegisterFlagCompletionFunc( //nolint:errcheck\n\t\t\"project-directory\",\n\t\tfunc(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\t\treturn []string{}, cobra.ShellCompDirectiveFilterDirs\n\t\t},\n\t)\n\tc.RegisterFlagCompletionFunc( //nolint:errcheck\n\t\t\"file\",\n\t\tfunc(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {\n\t\t\treturn []string{\"yaml\", \"yml\"}, cobra.ShellCompDirectiveFilterFileExt\n\t\t},\n\t)\n\tc.RegisterFlagCompletionFunc( //nolint:errcheck\n\t\t\"profile\",\n\t\tcompleteProfileNames(dockerCli, 
&opts),\n\t)\n\tc.RegisterFlagCompletionFunc( //nolint:errcheck\n\t\t\"progress\",\n\t\tcobra.FixedCompletions(printerModes, cobra.ShellCompDirectiveNoFileComp),\n\t)\n\n\tc.Flags().StringVar(&ansi, \"ansi\", \"auto\", `Control when to print ANSI control characters (\"never\"|\"always\"|\"auto\")`)\n\tc.Flags().IntVar(&parallel, \"parallel\", -1, `Control max parallelism, -1 for unlimited`)\n\tc.Flags().BoolVarP(&version, \"version\", \"v\", false, \"Show the Docker Compose version information\")\n\tc.PersistentFlags().BoolVar(&dryRun, \"dry-run\", false, \"Execute command in dry run mode\")\n\tc.Flags().MarkHidden(\"version\") //nolint:errcheck\n\tc.Flags().BoolVar(&noAnsi, \"no-ansi\", false, `Do not print ANSI control characters (DEPRECATED)`)\n\tc.Flags().MarkHidden(\"no-ansi\") //nolint:errcheck\n\tc.Flags().BoolVar(&verbose, \"verbose\", false, \"Show more output\")\n\tc.Flags().MarkHidden(\"verbose\") //nolint:errcheck\n\treturn c\n}\n\nfunc stdinfo(dockerCli command.Cli) io.Writer {\n\tif stdioToStdout {\n\t\treturn dockerCli.Out()\n\t}\n\treturn dockerCli.Err()\n}\n\nfunc setEnvWithDotEnv(opts ProjectOptions, dockerCli command.Cli) error {\n\t// Check if we're using a remote config (OCI or Git)\n\t// If so, skip env loading as remote loaders haven't been initialized yet\n\t// and trying to process the path would fail\n\tremoteLoaders := opts.remoteLoaders(dockerCli)\n\tfor _, path := range opts.ConfigPaths {\n\t\tfor _, loader := range remoteLoaders {\n\t\t\tif loader.Accept(path) {\n\t\t\t\t// Remote config - skip env loading for now\n\t\t\t\t// It will be loaded later when the project is fully initialized\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n\n\toptions, err := cli.NewProjectOptions(opts.ConfigPaths,\n\t\tcli.WithWorkingDirectory(opts.ProjectDir),\n\t\tcli.WithOsEnv,\n\t\tcli.WithEnvFiles(opts.EnvFiles...),\n\t\tcli.WithDotEnv,\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\tenvFromFile, err := 
dotenv.GetEnvFromFile(composegoutils.GetAsEqualsMap(os.Environ()), options.EnvFiles)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor k, v := range envFromFile {\n\t\tif _, ok := os.LookupEnv(k); !ok && strings.HasPrefix(k, \"COMPOSE_\") {\n\t\t\tif err := os.Setenv(k, v); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nvar printerModes = []string{\n\tdisplay.ModeAuto,\n\tdisplay.ModeTTY,\n\tdisplay.ModePlain,\n\tdisplay.ModeJSON,\n\tdisplay.ModeQuiet,\n}\n"
  },
  {
    "path": "cmd/compose/compose_oci_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\nfunc TestSetEnvWithDotEnv_WithOCIArtifact(t *testing.T) {\n\t// Test that setEnvWithDotEnv doesn't fail when using OCI artifacts\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcli := mocks.NewMockCli(ctrl)\n\n\topts := ProjectOptions{\n\t\tConfigPaths: []string{\"oci://docker.io/dockersamples/welcome-to-docker\"},\n\t\tProjectDir:  \"\",\n\t\tEnvFiles:    []string{},\n\t}\n\n\terr := setEnvWithDotEnv(opts, cli)\n\tassert.NilError(t, err, \"setEnvWithDotEnv should not fail with OCI artifact path\")\n}\n\nfunc TestSetEnvWithDotEnv_WithGitRemote(t *testing.T) {\n\t// Test that setEnvWithDotEnv doesn't fail when using Git remotes\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcli := mocks.NewMockCli(ctrl)\n\n\topts := ProjectOptions{\n\t\tConfigPaths: []string{\"https://github.com/docker/compose.git\"},\n\t\tProjectDir:  \"\",\n\t\tEnvFiles:    []string{},\n\t}\n\n\terr := setEnvWithDotEnv(opts, cli)\n\tassert.NilError(t, err, \"setEnvWithDotEnv should not fail with Git remote path\")\n}\n\nfunc TestSetEnvWithDotEnv_WithLocalPath(t *testing.T) {\n\t// Test that setEnvWithDotEnv still works with local paths\n\t// This will fail if the file doesn't exist, but 
it should not panic\n\t// or produce invalid paths\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcli := mocks.NewMockCli(ctrl)\n\n\topts := ProjectOptions{\n\t\tConfigPaths: []string{\"compose.yaml\"},\n\t\tProjectDir:  \"\",\n\t\tEnvFiles:    []string{},\n\t}\n\n\t// This may error if files don't exist, but should not panic\n\t_ = setEnvWithDotEnv(opts, cli)\n}\n"
  },
  {
    "path": "cmd/compose/compose_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestFilterServices(t *testing.T) {\n\tp := &types.Project{\n\t\tServices: types.Services{\n\t\t\t\"foo\": {\n\t\t\t\tName:  \"foo\",\n\t\t\t\tLinks: []string{\"bar\"},\n\t\t\t},\n\t\t\t\"bar\": {\n\t\t\t\tName: \"bar\",\n\t\t\t\tDependsOn: map[string]types.ServiceDependency{\n\t\t\t\t\t\"zot\": {},\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"zot\": {\n\t\t\t\tName: \"zot\",\n\t\t\t},\n\t\t\t\"qix\": {\n\t\t\t\tName: \"qix\",\n\t\t\t},\n\t\t},\n\t}\n\tp, err := p.WithSelectedServices([]string{\"bar\"})\n\tassert.NilError(t, err)\n\n\tassert.Equal(t, len(p.Services), 2)\n\t_, err = p.GetService(\"bar\")\n\tassert.NilError(t, err)\n\t_, err = p.GetService(\"zot\")\n\tassert.NilError(t, err)\n}\n"
  },
  {
    "path": "cmd/compose/config.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"slices\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/template\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\t\"go.yaml.in/yaml/v4\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype configOptions struct {\n\t*ProjectOptions\n\tFormat              string\n\tOutput              string\n\tquiet               bool\n\tresolveImageDigests bool\n\tnoInterpolate       bool\n\tnoNormalize         bool\n\tnoResolvePath       bool\n\tnoResolveEnv        bool\n\tservices            bool\n\tvolumes             bool\n\tnetworks            bool\n\tmodels              bool\n\tprofiles            bool\n\timages              bool\n\thash                string\n\tnoConsistency       bool\n\tvariables           bool\n\tenvironment         bool\n\tlockImageDigests    bool\n}\n\nfunc (o *configOptions) ToProject(ctx context.Context, dockerCli command.Cli, backend api.Compose, services []string) (*types.Project, error) {\n\tproject, _, err := o.ProjectOptions.ToProject(ctx, dockerCli, 
backend, services, o.toProjectOptionsFns()...)\n\treturn project, err\n}\n\nfunc (o *configOptions) ToModel(ctx context.Context, dockerCli command.Cli, services []string, po ...cli.ProjectOptionsFn) (map[string]any, error) {\n\tpo = append(po, o.toProjectOptionsFns()...)\n\treturn o.ProjectOptions.ToModel(ctx, dockerCli, services, po...)\n}\n\n// toProjectOptionsFns converts config options to cli.ProjectOptionsFn\nfunc (o *configOptions) toProjectOptionsFns() []cli.ProjectOptionsFn {\n\tfns := []cli.ProjectOptionsFn{\n\t\tcli.WithInterpolation(!o.noInterpolate),\n\t\tcli.WithResolvedPaths(!o.noResolvePath),\n\t\tcli.WithNormalization(!o.noNormalize),\n\t\tcli.WithConsistency(!o.noConsistency),\n\t\tcli.WithDefaultProfiles(o.Profiles...),\n\t\tcli.WithDiscardEnvFile,\n\t}\n\tif o.noResolveEnv {\n\t\tfns = append(fns, cli.WithoutEnvironmentResolution)\n\t}\n\treturn fns\n}\n\nfunc configCommand(p *ProjectOptions, dockerCli command.Cli) *cobra.Command {\n\topts := configOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"config [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Parse, resolve and render compose file in canonical format\",\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tif opts.quiet {\n\t\t\t\tdevnull, err := os.Open(os.DevNull)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tos.Stdout = devnull\n\t\t\t}\n\t\t\tif p.Compatibility {\n\t\t\t\topts.noNormalize = true\n\t\t\t}\n\t\t\tif opts.lockImageDigests {\n\t\t\t\topts.resolveImageDigests = true\n\t\t\t}\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tif opts.services {\n\t\t\t\treturn runServices(ctx, dockerCli, opts)\n\t\t\t}\n\t\t\tif opts.volumes {\n\t\t\t\treturn runVolumes(ctx, dockerCli, opts)\n\t\t\t}\n\t\t\tif opts.networks {\n\t\t\t\treturn runNetworks(ctx, dockerCli, opts)\n\t\t\t}\n\t\t\tif opts.models {\n\t\t\t\treturn runModels(ctx, dockerCli, opts)\n\t\t\t}\n\t\t\tif 
opts.hash != \"\" {\n\t\t\t\treturn runHash(ctx, dockerCli, opts)\n\t\t\t}\n\t\t\tif opts.profiles {\n\t\t\t\treturn runProfiles(ctx, dockerCli, opts, args)\n\t\t\t}\n\t\t\tif opts.images {\n\t\t\t\treturn runConfigImages(ctx, dockerCli, opts, args)\n\t\t\t}\n\t\t\tif opts.variables {\n\t\t\t\treturn runVariables(ctx, dockerCli, opts, args)\n\t\t\t}\n\t\t\tif opts.environment {\n\t\t\t\treturn runEnvironment(ctx, dockerCli, opts, args)\n\t\t\t}\n\n\t\t\tif opts.Format == \"\" {\n\t\t\t\topts.Format = \"yaml\"\n\t\t\t}\n\t\t\treturn runConfig(ctx, dockerCli, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.StringVar(&opts.Format, \"format\", \"\", \"Format the output. Values: [yaml | json]\")\n\tflags.BoolVar(&opts.resolveImageDigests, \"resolve-image-digests\", false, \"Pin image tags to digests\")\n\tflags.BoolVar(&opts.lockImageDigests, \"lock-image-digests\", false, \"Produces an override file with image digests\")\n\tflags.BoolVarP(&opts.quiet, \"quiet\", \"q\", false, \"Only validate the configuration, don't print anything\")\n\tflags.BoolVar(&opts.noInterpolate, \"no-interpolate\", false, \"Don't interpolate environment variables\")\n\tflags.BoolVar(&opts.noNormalize, \"no-normalize\", false, \"Don't normalize compose model\")\n\tflags.BoolVar(&opts.noResolvePath, \"no-path-resolution\", false, \"Don't resolve file paths\")\n\tflags.BoolVar(&opts.noConsistency, \"no-consistency\", false, \"Don't check model consistency - warning: may produce invalid Compose output\")\n\tflags.BoolVar(&opts.noResolveEnv, \"no-env-resolution\", false, \"Don't resolve service env files\")\n\n\tflags.BoolVar(&opts.services, \"services\", false, \"Print the service names, one per line.\")\n\tflags.BoolVar(&opts.volumes, \"volumes\", false, \"Print the volume names, one per line.\")\n\tflags.BoolVar(&opts.networks, \"networks\", false, \"Print the network names, one per line.\")\n\tflags.BoolVar(&opts.models, 
\"models\", false, \"Print the model names, one per line.\")\n\tflags.BoolVar(&opts.profiles, \"profiles\", false, \"Print the profile names, one per line.\")\n\tflags.BoolVar(&opts.images, \"images\", false, \"Print the image names, one per line.\")\n\tflags.StringVar(&opts.hash, \"hash\", \"\", \"Print the service config hash, one per line.\")\n\tflags.BoolVar(&opts.variables, \"variables\", false, \"Print model variables and default values.\")\n\tflags.BoolVar(&opts.environment, \"environment\", false, \"Print environment used for interpolation.\")\n\tflags.StringVarP(&opts.Output, \"output\", \"o\", \"\", \"Save to file (default to stdout)\")\n\n\treturn cmd\n}\n\nfunc runConfig(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) (err error) {\n\tvar content []byte\n\tif opts.noInterpolate {\n\t\tcontent, err = runConfigNoInterpolate(ctx, dockerCli, opts, services)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tcontent, err = runConfigInterpolate(ctx, dockerCli, opts, services)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif !opts.noInterpolate {\n\t\tcontent = escapeDollarSign(content)\n\t}\n\n\tif opts.quiet {\n\t\treturn nil\n\t}\n\n\tif opts.Output != \"\" && len(content) > 0 {\n\t\treturn os.WriteFile(opts.Output, content, 0o666)\n\t}\n\t_, err = fmt.Fprint(dockerCli.Out(), string(content))\n\treturn err\n}\n\nfunc runConfigInterpolate(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) ([]byte, error) {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tproject, err := opts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif opts.resolveImageDigests {\n\t\tproject, err = project.WithImagesResolved(compose.ImageDigestResolver(ctx, dockerCli.ConfigFile(), dockerCli.Client()))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif !opts.noResolveEnv 
{\n\t\tproject, err = project.WithServicesEnvironmentResolved(true)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif !opts.noConsistency {\n\t\terr := project.CheckContainerNameUnicity()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif opts.lockImageDigests {\n\t\tproject = imagesOnly(project)\n\t}\n\n\tvar content []byte\n\tswitch opts.Format {\n\tcase \"json\":\n\t\tcontent, err = project.MarshalJSON()\n\tcase \"yaml\":\n\t\tcontent, err = project.MarshalYAML()\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unsupported format %q\", opts.Format)\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn content, nil\n}\n\n// imagesOnly returns a project with all attributes removed except service images\nfunc imagesOnly(project *types.Project) *types.Project {\n\tdigests := types.Services{}\n\tfor name, config := range project.Services {\n\t\tdigests[name] = types.ServiceConfig{\n\t\t\tImage: config.Image,\n\t\t}\n\t}\n\tproject = &types.Project{Services: digests}\n\treturn project\n}\n\nfunc runConfigNoInterpolate(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) ([]byte, error) {\n\t// we can't use ToProject, so the model we render here is only partially resolved\n\tmodel, err := opts.ToModel(ctx, dockerCli, services)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif opts.resolveImageDigests {\n\t\terr = resolveImageDigests(ctx, dockerCli, model)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif opts.lockImageDigests {\n\t\tfor key, e := range model {\n\t\t\tif key != \"services\" {\n\t\t\t\tdelete(model, key)\n\t\t\t} else {\n\t\t\t\tfor _, s := range e.(map[string]any) {\n\t\t\t\t\tservice := s.(map[string]any)\n\t\t\t\t\tfor key := range service {\n\t\t\t\t\t\tif key != \"image\" {\n\t\t\t\t\t\t\tdelete(service, key)\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn formatModel(model, opts.Format)\n}\n\nfunc resolveImageDigests(ctx context.Context, dockerCli 
command.Cli, model map[string]any) (err error) {\n\t// create a pseudo-project so we can rely on WithImagesResolved to resolve images\n\tp := &types.Project{\n\t\tServices: types.Services{},\n\t}\n\tservices := model[\"services\"].(map[string]any)\n\tfor name, s := range services {\n\t\tservice := s.(map[string]any)\n\t\tif image, ok := service[\"image\"]; ok {\n\t\t\tp.Services[name] = types.ServiceConfig{\n\t\t\t\tImage: image.(string),\n\t\t\t}\n\t\t}\n\t}\n\n\tp, err = p.WithImagesResolved(compose.ImageDigestResolver(ctx, dockerCli.ConfigFile(), dockerCli.Client()))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Collect image resolved with digest and update model accordingly\n\tfor name, s := range services {\n\t\tservice := s.(map[string]any)\n\t\tconfig := p.Services[name]\n\t\tif config.Image != \"\" {\n\t\t\tservice[\"image\"] = config.Image\n\t\t}\n\t\tservices[name] = service\n\t}\n\tmodel[\"services\"] = services\n\treturn nil\n}\n\nfunc formatModel(model map[string]any, format string) (content []byte, err error) {\n\tswitch format {\n\tcase \"json\":\n\t\treturn json.MarshalIndent(model, \"\", \"  \")\n\tcase \"yaml\":\n\t\tbuf := bytes.NewBuffer([]byte{})\n\t\tencoder := yaml.NewEncoder(buf)\n\t\tencoder.SetIndent(2)\n\t\terr = encoder.Encode(model)\n\t\treturn buf.Bytes(), err\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unsupported format %q\", format)\n\t}\n}\n\nfunc runServices(ctx context.Context, dockerCli command.Cli, opts configOptions) error {\n\tif opts.noInterpolate {\n\t\t// we can't use ToProject, so the model we render here is only partially resolved\n\t\tdata, err := opts.ToModel(ctx, dockerCli, nil, cli.WithoutEnvironmentResolution)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif _, ok := data[\"services\"]; ok {\n\t\t\tfor serviceName := range data[\"services\"].(map[string]any) {\n\t\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), serviceName)\n\t\t\t}\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tbackend, err := 
compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ProjectOptions.ToProject(ctx, dockerCli, backend, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = project.ForEachService(project.ServiceNames(), func(serviceName string, _ *types.ServiceConfig) error {\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), serviceName)\n\t\treturn nil\n\t})\n\n\treturn err\n}\n\nfunc runVolumes(ctx context.Context, dockerCli command.Cli, opts configOptions) error {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ProjectOptions.ToProject(ctx, dockerCli, backend, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor n := range project.Volumes {\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), n)\n\t}\n\treturn nil\n}\n\nfunc runNetworks(ctx context.Context, dockerCli command.Cli, opts configOptions) error {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ProjectOptions.ToProject(ctx, dockerCli, backend, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor n := range project.Networks {\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), n)\n\t}\n\treturn nil\n}\n\nfunc runModels(ctx context.Context, dockerCli command.Cli, opts configOptions) error {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ProjectOptions.ToProject(ctx, dockerCli, backend, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, model := range project.Models {\n\t\tif model.Model != \"\" {\n\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), model.Model)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc runHash(ctx context.Context, dockerCli command.Cli, opts configOptions) error {\n\tvar services []string\n\tif opts.hash != \"*\" {\n\t\tservices = 
append(services, strings.Split(opts.hash, \",\")...)\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ProjectOptions.ToProject(ctx, dockerCli, backend, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := applyPlatforms(project, true); err != nil {\n\t\treturn err\n\t}\n\n\tif len(services) == 0 {\n\t\tservices = project.ServiceNames()\n\t}\n\n\tsorted := services\n\tslices.Sort(sorted)\n\n\tfor _, name := range sorted {\n\t\ts, err := project.GetService(name)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\thash, err := compose.ServiceHash(s)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, _ = fmt.Fprintf(dockerCli.Out(), \"%s %s\\n\", name, hash)\n\t}\n\treturn nil\n}\n\nfunc runProfiles(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) error {\n\tset := map[string]struct{}{}\n\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, err := opts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, s := range project.AllServices() {\n\t\tfor _, p := range s.Profiles {\n\t\t\tset[p] = struct{}{}\n\t\t}\n\t}\n\tprofiles := make([]string, 0, len(set))\n\tfor p := range set {\n\t\tprofiles = append(profiles, p)\n\t}\n\tsort.Strings(profiles)\n\tfor _, p := range profiles {\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), p)\n\t}\n\treturn nil\n}\n\nfunc runConfigImages(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) error {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, err := opts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, s := range project.Services {\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), api.GetImageNameOrDefault(s, project.Name))\n\t}\n\treturn nil\n}\n\nfunc 
runVariables(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) error {\n\topts.noInterpolate = true\n\tmodel, err := opts.ToModel(ctx, dockerCli, services, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvariables := template.ExtractVariables(model, template.DefaultPattern)\n\n\tif opts.Format == \"yaml\" {\n\t\tresult, err := yaml.Marshal(variables)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfmt.Print(string(result))\n\t\treturn nil\n\t}\n\n\treturn formatter.Print(variables, opts.Format, dockerCli.Out(), func(w io.Writer) {\n\t\tfor name, variable := range variables {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%t\\t%s\\t%s\\n\", name, variable.Required, variable.DefaultValue, variable.PresenceValue)\n\t\t}\n\t}, \"NAME\", \"REQUIRED\", \"DEFAULT VALUE\", \"ALTERNATE VALUE\")\n}\n\nfunc runEnvironment(ctx context.Context, dockerCli command.Cli, opts configOptions, services []string) error {\n\tbackend, err := compose.NewComposeService(dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, err := opts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, v := range project.Environment.Values() {\n\t\tfmt.Println(v)\n\t}\n\treturn nil\n}\n\nfunc escapeDollarSign(marshal []byte) []byte {\n\tdollar := []byte{'$'}\n\tescDollar := []byte{'$', '$'}\n\treturn bytes.ReplaceAll(marshal, dollar, escDollar)\n}\n"
  },
  {
    "path": "cmd/compose/cp.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype copyOptions struct {\n\t*ProjectOptions\n\n\tsource      string\n\tdestination string\n\tindex       int\n\tall         bool\n\tfollowLink  bool\n\tcopyUIDGID  bool\n}\n\nfunc copyCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := copyOptions{\n\t\tProjectOptions: p,\n\t}\n\tcopyCmd := &cobra.Command{\n\t\tUse: `cp [OPTIONS] SERVICE:SRC_PATH DEST_PATH|-\n\tdocker compose cp [OPTIONS] SRC_PATH|- SERVICE:DEST_PATH`,\n\t\tShort: \"Copy files/folders between a service container and the local filesystem\",\n\t\tArgs:  cli.ExactArgs(2),\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tif args[0] == \"\" {\n\t\t\t\treturn errors.New(\"source can not be empty\")\n\t\t\t}\n\t\t\tif args[1] == \"\" {\n\t\t\t\treturn errors.New(\"destination can not be empty\")\n\t\t\t}\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\topts.source = args[0]\n\t\t\topts.destination = args[1]\n\t\t\treturn runCopy(ctx, dockerCli, backendOptions, 
opts)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\tflags := copyCmd.Flags()\n\tflags.IntVar(&opts.index, \"index\", 0, \"Index of the container if service has multiple replicas\")\n\tflags.BoolVar(&opts.all, \"all\", false, \"Include containers created by the run command\")\n\tflags.BoolVarP(&opts.followLink, \"follow-link\", \"L\", false, \"Always follow symbol link in SRC_PATH\")\n\tflags.BoolVarP(&opts.copyUIDGID, \"archive\", \"a\", false, \"Archive mode (copy all uid/gid information)\")\n\n\treturn copyCmd\n}\n\nfunc runCopy(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts copyOptions) error {\n\tname, err := opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Copy(ctx, name, api.CopyOptions{\n\t\tSource:      opts.source,\n\t\tDestination: opts.destination,\n\t\tAll:         opts.all,\n\t\tIndex:       opts.index,\n\t\tFollowLink:  opts.followLink,\n\t\tCopyUIDGID:  opts.copyUIDGID,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/create.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype createOptions struct {\n\tBuild         bool\n\tnoBuild       bool\n\tPull          string\n\tpullChanged   bool\n\tremoveOrphans bool\n\tignoreOrphans bool\n\tforceRecreate bool\n\tnoRecreate    bool\n\trecreateDeps  bool\n\tnoInherit     bool\n\ttimeChanged   bool\n\ttimeout       int\n\tquietPull     bool\n\tscale         []string\n\tAssumeYes     bool\n}\n\nfunc createCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := createOptions{}\n\tbuildOpts := buildOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"create [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Creates containers for a service\",\n\t\tPreRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\topts.pullChanged = cmd.Flags().Changed(\"pull\")\n\t\t\tif opts.Build && opts.noBuild {\n\t\t\t\treturn fmt.Errorf(\"--build and --no-build are 
incompatible\")\n\t\t\t}\n\t\t\tif opts.forceRecreate && opts.noRecreate {\n\t\t\t\treturn fmt.Errorf(\"--force-recreate and --no-recreate are incompatible\")\n\t\t\t}\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: p.WithServices(dockerCli, func(ctx context.Context, project *types.Project, services []string) error {\n\t\t\treturn runCreate(ctx, dockerCli, backendOptions, opts, buildOpts, project, services)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.BoolVar(&opts.Build, \"build\", false, \"Build images before starting containers\")\n\tflags.BoolVar(&opts.noBuild, \"no-build\", false, \"Don't build an image, even if it's missing\")\n\tflags.StringVar(&opts.Pull, \"pull\", \"policy\", `Pull image before running (\"always\"|\"missing\"|\"never\"|\"build\")`)\n\tflags.BoolVar(&opts.quietPull, \"quiet-pull\", false, \"Pull without printing progress information\")\n\tflags.BoolVar(&opts.forceRecreate, \"force-recreate\", false, \"Recreate containers even if their configuration and image haven't changed\")\n\tflags.BoolVar(&opts.noRecreate, \"no-recreate\", false, \"If containers already exist, don't recreate them. Incompatible with --force-recreate.\")\n\tflags.BoolVar(&opts.removeOrphans, \"remove-orphans\", false, \"Remove containers for services not defined in the Compose file\")\n\tflags.StringArrayVar(&opts.scale, \"scale\", []string{}, \"Scale SERVICE to NUM instances. 
Overrides the `scale` setting in the Compose file if present.\")\n\tflags.BoolVarP(&opts.AssumeYes, \"yes\", \"y\", false, `Assume \"yes\" as answer to all prompts and run non-interactively`)\n\tflags.SetNormalizeFunc(func(f *pflag.FlagSet, name string) pflag.NormalizedName {\n\t\t// assumeYes was introduced by mistake as `--y`\n\t\tif name == \"y\" {\n\t\t\tlogrus.Warn(\"--y is deprecated, please use --yes instead\")\n\t\t\tname = \"yes\"\n\t\t}\n\t\treturn pflag.NormalizedName(name)\n\t})\n\treturn cmd\n}\n\nfunc runCreate(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, createOpts createOptions, buildOpts buildOptions, project *types.Project, services []string) error {\n\tif err := createOpts.Apply(project); err != nil {\n\t\treturn err\n\t}\n\n\tvar build *api.BuildOptions\n\tif !createOpts.noBuild {\n\t\tbo, err := buildOpts.toAPIBuildOptions(services)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tbuild = &bo\n\t}\n\n\tif createOpts.AssumeYes {\n\t\tbackendOptions.Options = append(backendOptions.Options, compose.WithPrompt(compose.AlwaysOkPrompt()))\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Create(ctx, project, api.CreateOptions{\n\t\tBuild:                build,\n\t\tServices:             services,\n\t\tRemoveOrphans:        createOpts.removeOrphans,\n\t\tIgnoreOrphans:        createOpts.ignoreOrphans,\n\t\tRecreate:             createOpts.recreateStrategy(),\n\t\tRecreateDependencies: createOpts.dependenciesRecreateStrategy(),\n\t\tInherit:              !createOpts.noInherit,\n\t\tTimeout:              createOpts.GetTimeout(),\n\t\tQuietPull:            createOpts.quietPull,\n\t})\n}\n\nfunc (opts createOptions) recreateStrategy() string {\n\tif opts.noRecreate {\n\t\treturn api.RecreateNever\n\t}\n\tif opts.forceRecreate {\n\t\treturn api.RecreateForce\n\t}\n\tif opts.noInherit {\n\t\treturn 
api.RecreateForce\n\t}\n\treturn api.RecreateDiverged\n}\n\nfunc (opts createOptions) dependenciesRecreateStrategy() string {\n\tif opts.noRecreate {\n\t\treturn api.RecreateNever\n\t}\n\tif opts.recreateDeps {\n\t\treturn api.RecreateForce\n\t}\n\treturn api.RecreateDiverged\n}\n\nfunc (opts createOptions) GetTimeout() *time.Duration {\n\tif opts.timeChanged {\n\t\tt := time.Duration(opts.timeout) * time.Second\n\t\treturn &t\n\t}\n\treturn nil\n}\n\nfunc (opts createOptions) Apply(project *types.Project) error {\n\tif opts.pullChanged {\n\t\tif !opts.isPullPolicyValid() {\n\t\t\treturn fmt.Errorf(\"invalid --pull option %q\", opts.Pull)\n\t\t}\n\t\tfor i, service := range project.Services {\n\t\t\tservice.PullPolicy = opts.Pull\n\t\t\tproject.Services[i] = service\n\t\t}\n\t}\n\t// N.B. opts.Build means \"force build all\", but images can still be built\n\t// when this is false\n\t// e.g. if a service has pull_policy: build or its local image is missing\n\tif opts.Build {\n\t\tfor i, service := range project.Services {\n\t\t\tif service.Build == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tservice.PullPolicy = types.PullPolicyBuild\n\t\t\tproject.Services[i] = service\n\t\t}\n\t}\n\n\tif err := applyPlatforms(project, true); err != nil {\n\t\treturn err\n\t}\n\n\terr := applyScaleOpts(project, opts.scale)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc applyScaleOpts(project *types.Project, opts []string) error {\n\tfor _, scale := range opts {\n\t\tname, val, ok := strings.Cut(scale, \"=\")\n\t\tif !ok || val == \"\" {\n\t\t\treturn fmt.Errorf(\"invalid --scale option %q. 
Should be SERVICE=NUM\", scale)\n\t\t}\n\t\treplicas, err := strconv.Atoi(val)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\terr = setServiceScale(project, name, replicas)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (opts createOptions) isPullPolicyValid() bool {\n\tpullPolicies := []string{\n\t\ttypes.PullPolicyAlways, types.PullPolicyNever, types.PullPolicyBuild,\n\t\ttypes.PullPolicyMissing, types.PullPolicyIfNotPresent,\n\t}\n\treturn slices.Contains(pullPolicies, opts.Pull)\n}\n"
  },
  {
    "path": "cmd/compose/down.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\ntype downOptions struct {\n\t*ProjectOptions\n\tremoveOrphans bool\n\ttimeChanged   bool\n\ttimeout       int\n\tvolumes       bool\n\timages        string\n}\n\nfunc downCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := downOptions{\n\t\tProjectOptions: p,\n\t}\n\tdownCmd := &cobra.Command{\n\t\tUse:   \"down [OPTIONS] [SERVICES]\",\n\t\tShort: \"Stop and remove containers, networks\",\n\t\tPreRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\topts.timeChanged = cmd.Flags().Changed(\"timeout\")\n\t\t\tif opts.images != \"\" {\n\t\t\t\tif opts.images != \"all\" && opts.images != \"local\" {\n\t\t\t\t\treturn fmt.Errorf(\"invalid value for --rmi: %q\", opts.images)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runDown(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: 
completeServiceNames(dockerCli, p),\n\t}\n\tflags := downCmd.Flags()\n\tremoveOrphans := utils.StringToBool(os.Getenv(ComposeRemoveOrphans))\n\tflags.BoolVar(&opts.removeOrphans, \"remove-orphans\", removeOrphans, \"Remove containers for services not defined in the Compose file\")\n\tflags.IntVarP(&opts.timeout, \"timeout\", \"t\", 0, \"Specify a shutdown timeout in seconds\")\n\tflags.BoolVarP(&opts.volumes, \"volumes\", \"v\", false, `Remove named volumes declared in the \"volumes\" section of the Compose file and anonymous volumes attached to containers`)\n\tflags.StringVar(&opts.images, \"rmi\", \"\", `Remove images used by services. \"local\" remove only images that don't have a custom tag (\"local\"|\"all\")`)\n\tflags.SetNormalizeFunc(func(f *pflag.FlagSet, name string) pflag.NormalizedName {\n\t\tif name == \"volume\" {\n\t\t\tname = \"volumes\"\n\t\t\tlogrus.Warn(\"--volume is deprecated, please use --volumes\")\n\t\t}\n\t\treturn pflag.NormalizedName(name)\n\t})\n\treturn downCmd\n}\n\nfunc runDown(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts downOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar timeout *time.Duration\n\tif opts.timeChanged {\n\t\ttimeoutValue := time.Duration(opts.timeout) * time.Second\n\t\ttimeout = &timeoutValue\n\t}\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Down(ctx, name, api.DownOptions{\n\t\tRemoveOrphans: opts.removeOrphans,\n\t\tProject:       project,\n\t\tTimeout:       timeout,\n\t\tImages:        opts.images,\n\t\tVolumes:       opts.volumes,\n\t\tServices:      services,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/events.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype eventsOpts struct {\n\t*composeOptions\n\tjson  bool\n\tsince string\n\tuntil string\n}\n\nfunc eventsCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := eventsOpts{\n\t\tcomposeOptions: &composeOptions{\n\t\t\tProjectOptions: p,\n\t\t},\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"events [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Receive real time events from containers\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runEvents(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\tcmd.Flags().BoolVar(&opts.json, \"json\", false, \"Output events as a stream of json objects\")\n\tcmd.Flags().StringVar(&opts.since, \"since\", \"\", \"Show all events created since timestamp\")\n\tcmd.Flags().StringVar(&opts.until, \"until\", \"\", \"Stream events until this timestamp\")\n\treturn cmd\n}\n\nfunc runEvents(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts eventsOpts, services []string) error {\n\tname, err := 
opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Events(ctx, name, api.EventsOptions{\n\t\tServices: services,\n\t\tSince:    opts.since,\n\t\tUntil:    opts.until,\n\t\tConsumer: func(event api.Event) error {\n\t\t\tif opts.json {\n\t\t\t\tmarshal, err := json.Marshal(map[string]any{\n\t\t\t\t\t\"time\":       event.Timestamp,\n\t\t\t\t\t\"type\":       \"container\",\n\t\t\t\t\t\"service\":    event.Service,\n\t\t\t\t\t\"id\":         event.Container,\n\t\t\t\t\t\"action\":     event.Status,\n\t\t\t\t\t\"attributes\": event.Attributes,\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), string(marshal))\n\t\t\t} else {\n\t\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), event)\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/exec.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype execOpts struct {\n\t*composeOptions\n\n\tservice     string\n\tcommand     []string\n\tenvironment []string\n\tworkingDir  string\n\n\tnoTty       bool\n\tuser        string\n\tdetach      bool\n\tindex       int\n\tprivileged  bool\n\tinteractive bool\n}\n\nfunc execCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := execOpts{\n\t\tcomposeOptions: &composeOptions{\n\t\t\tProjectOptions: p,\n\t\t},\n\t}\n\trunCmd := &cobra.Command{\n\t\tUse:   \"exec [OPTIONS] SERVICE COMMAND [ARGS...]\",\n\t\tShort: \"Execute a command in a running container\",\n\t\tArgs:  cobra.MinimumNArgs(2),\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\topts.service = args[0]\n\t\t\topts.command = args[1:]\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\terr := runExec(ctx, dockerCli, backendOptions, opts)\n\t\t\tif err != nil 
{\n\t\t\t\tlogrus.Debugf(\"%v\", err)\n\t\t\t\tvar cliError cli.StatusError\n\t\t\t\tif ok := errors.As(err, &cliError); ok {\n\t\t\t\t\tos.Exit(err.(cli.StatusError).StatusCode) //nolint: errorlint\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn err\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\trunCmd.Flags().BoolVarP(&opts.detach, \"detach\", \"d\", false, \"Detached mode: Run command in the background\")\n\trunCmd.Flags().StringArrayVarP(&opts.environment, \"env\", \"e\", []string{}, \"Set environment variables\")\n\trunCmd.Flags().IntVar(&opts.index, \"index\", 0, \"Index of the container if service has multiple replicas\")\n\trunCmd.Flags().BoolVarP(&opts.privileged, \"privileged\", \"\", false, \"Give extended privileges to the process\")\n\trunCmd.Flags().StringVarP(&opts.user, \"user\", \"u\", \"\", \"Run the command as this user\")\n\trunCmd.Flags().BoolVarP(&opts.noTty, \"no-tty\", \"T\", !dockerCli.Out().IsTerminal(), \"Disable pseudo-TTY allocation. By default 'docker compose exec' allocates a TTY.\")\n\trunCmd.Flags().StringVarP(&opts.workingDir, \"workdir\", \"w\", \"\", \"Path to workdir directory for this command\")\n\n\trunCmd.Flags().BoolVarP(&opts.interactive, \"interactive\", \"i\", true, \"Keep STDIN open even if not attached\")\n\trunCmd.Flags().MarkHidden(\"interactive\") //nolint:errcheck\n\trunCmd.Flags().BoolP(\"tty\", \"t\", true, \"Allocate a pseudo-TTY\")\n\trunCmd.Flags().MarkHidden(\"tty\") //nolint:errcheck\n\n\trunCmd.Flags().SetInterspersed(false)\n\trunCmd.Flags().SetNormalizeFunc(func(f *pflag.FlagSet, name string) pflag.NormalizedName {\n\t\tif name == \"no-TTY\" { // legacy\n\t\t\tname = \"no-tty\"\n\t\t}\n\t\treturn pflag.NormalizedName(name)\n\t})\n\treturn runCmd\n}\n\nfunc runExec(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts execOpts) error {\n\tprojectName, err := opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\tprojectOptions, err := 
opts.composeOptions.toProjectOptions() //nolint:staticcheck\n\tif err != nil {\n\t\treturn err\n\t}\n\tlookupFn := func(k string) (string, bool) {\n\t\tv, ok := projectOptions.Environment[k]\n\t\treturn v, ok\n\t}\n\texecOpts := api.RunOptions{\n\t\tService:     opts.service,\n\t\tCommand:     opts.command,\n\t\tEnvironment: compose.ToMobyEnv(types.NewMappingWithEquals(opts.environment).Resolve(lookupFn)),\n\t\tTty:         !opts.noTty,\n\t\tUser:        opts.user,\n\t\tPrivileged:  opts.privileged,\n\t\tIndex:       opts.index,\n\t\tDetach:      opts.detach,\n\t\tWorkingDir:  opts.workingDir,\n\t\tInteractive: opts.interactive,\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\texitCode, err := backend.Exec(ctx, projectName, execOpts)\n\tif exitCode != 0 {\n\t\terrMsg := fmt.Sprintf(\"exit status %d\", exitCode)\n\t\tif err != nil && err.Error() != \"\" {\n\t\t\terrMsg = err.Error()\n\t\t}\n\t\treturn cli.StatusError{StatusCode: exitCode, Status: errMsg}\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "cmd/compose/export.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype exportOptions struct {\n\t*ProjectOptions\n\n\tservice string\n\toutput  string\n\tindex   int\n}\n\nfunc exportCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\toptions := exportOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"export [OPTIONS] SERVICE\",\n\t\tShort: \"Export a service container's filesystem as a tar archive\",\n\t\tArgs:  cobra.MinimumNArgs(1),\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\toptions.service = args[0]\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runExport(ctx, dockerCli, backendOptions, options)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\tflags := cmd.Flags()\n\tflags.IntVar(&options.index, \"index\", 0, \"index of the container if service has multiple replicas.\")\n\tflags.StringVarP(&options.output, \"output\", \"o\", \"\", \"Write to a file, instead of STDOUT\")\n\n\treturn cmd\n}\n\nfunc runExport(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, options 
exportOptions) error {\n\tprojectName, err := options.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\texportOptions := api.ExportOptions{\n\t\tService: options.service,\n\t\tIndex:   options.index,\n\t\tOutput:  options.output,\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Export(ctx, projectName, exportOptions)\n}\n"
  },
  {
    "path": "cmd/compose/generate.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype generateOptions struct {\n\t*ProjectOptions\n\tFormat string\n}\n\nfunc generateCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := generateOptions{\n\t\tProjectOptions: p,\n\t}\n\n\tcmd := &cobra.Command{\n\t\tUse:   \"generate [OPTIONS] [CONTAINERS...]\",\n\t\tShort: \"EXPERIMENTAL - Generate a Compose file from existing containers\",\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runGenerate(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t}\n\n\tcmd.Flags().StringVar(&opts.ProjectName, \"name\", \"\", \"Project name to set in the Compose file\")\n\tcmd.Flags().StringVar(&opts.ProjectDir, \"project-dir\", \"\", \"Directory to use for the project\")\n\tcmd.Flags().StringVar(&opts.Format, \"format\", \"yaml\", \"Format the output. 
Values: [yaml | json]\")\n\treturn cmd\n}\n\nfunc runGenerate(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts generateOptions, containers []string) error {\n\t_, _ = fmt.Fprintln(os.Stderr, \"generate command is EXPERIMENTAL\")\n\tif len(containers) == 0 {\n\t\treturn fmt.Errorf(\"at least one container must be specified\")\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tproject, err := backend.Generate(ctx, api.GenerateOptions{\n\t\tContainers:  containers,\n\t\tProjectName: opts.ProjectName,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar content []byte\n\tswitch opts.Format {\n\tcase \"json\":\n\t\tcontent, err = project.MarshalJSON()\n\tcase \"yaml\":\n\t\tcontent, err = project.MarshalYAML()\n\tdefault:\n\t\treturn fmt.Errorf(\"unsupported format %q\", opts.Format)\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\tfmt.Println(string(content))\n\n\treturn nil\n}\n"
  },
  {
    "path": "cmd/compose/images.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"maps\"\n\t\"slices\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/containerd/platforms\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/go-units\"\n\t\"github.com/moby/moby/client/pkg/stringid\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype imageOptions struct {\n\t*ProjectOptions\n\tQuiet  bool\n\tFormat string\n}\n\nfunc imagesCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := imageOptions{\n\t\tProjectOptions: p,\n\t}\n\timgCmd := &cobra.Command{\n\t\tUse:   \"images [OPTIONS] [SERVICE...]\",\n\t\tShort: \"List images used by the created containers\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runImages(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\timgCmd.Flags().StringVar(&opts.Format, \"format\", \"table\", \"Format the output. 
Values: [table | json]\")\n\timgCmd.Flags().BoolVarP(&opts.Quiet, \"quiet\", \"q\", false, \"Only display IDs\")\n\treturn imgCmd\n}\n\nfunc runImages(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts imageOptions, services []string) error {\n\tprojectName, err := opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\timages, err := backend.Images(ctx, projectName, api.ImagesOptions{\n\t\tServices: services,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif opts.Quiet {\n\t\tids := []string{}\n\t\tfor _, img := range images {\n\t\t\tid := img.ID\n\t\t\tif i := strings.IndexRune(img.ID, ':'); i >= 0 {\n\t\t\t\tid = id[i+1:]\n\t\t\t}\n\t\t\tif !slices.Contains(ids, id) {\n\t\t\t\tids = append(ids, id)\n\t\t\t}\n\t\t}\n\t\tfor _, img := range ids {\n\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), img)\n\t\t}\n\t\treturn nil\n\t}\n\tif opts.Format == \"json\" {\n\n\t\ttype img struct {\n\t\t\tID            string     `json:\"ID\"`\n\t\t\tContainerName string     `json:\"ContainerName\"`\n\t\t\tRepository    string     `json:\"Repository\"`\n\t\t\tTag           string     `json:\"Tag\"`\n\t\t\tPlatform      string     `json:\"Platform\"`\n\t\t\tSize          int64      `json:\"Size\"`\n\t\t\tCreated       *time.Time `json:\"Created,omitempty\"`\n\t\t\tLastTagTime   time.Time  `json:\"LastTagTime,omitzero\"`\n\t\t}\n\t\t// Convert map to slice\n\t\tvar imageList []img\n\t\tfor ctr, i := range images {\n\t\t\tlastTagTime := i.LastTagTime\n\t\t\timageList = append(imageList, img{\n\t\t\t\tContainerName: ctr,\n\t\t\t\tID:            i.ID,\n\t\t\t\tRepository:    i.Repository,\n\t\t\t\tTag:           i.Tag,\n\t\t\t\tPlatform:      platforms.Format(i.Platform),\n\t\t\t\tSize:          i.Size,\n\t\t\t\tCreated:       i.Created,\n\t\t\t\tLastTagTime:   lastTagTime,\n\t\t\t})\n\t\t}\n\t\tjson, err := 
formatter.ToJSON(imageList, \"\", \"\")\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = fmt.Fprintln(dockerCli.Out(), json)\n\t\treturn err\n\t}\n\n\treturn formatter.Print(images, opts.Format, dockerCli.Out(),\n\t\tfunc(w io.Writer) {\n\t\t\tfor _, container := range slices.Sorted(maps.Keys(images)) {\n\t\t\t\timg := images[container]\n\t\t\t\tid := stringid.TruncateID(img.ID)\n\t\t\t\tsize := units.HumanSizeWithPrecision(float64(img.Size), 3)\n\t\t\t\trepo := img.Repository\n\t\t\t\tif repo == \"\" {\n\t\t\t\t\trepo = \"<none>\"\n\t\t\t\t}\n\t\t\t\ttag := img.Tag\n\t\t\t\tif tag == \"\" {\n\t\t\t\t\ttag = \"<none>\"\n\t\t\t\t}\n\t\t\t\tcreated := \"N/A\"\n\t\t\t\tif img.Created != nil {\n\t\t\t\t\tcreated = units.HumanDuration(time.Now().UTC().Sub(*img.Created)) + \" ago\"\n\t\t\t\t}\n\t\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\t%s\\t%s\\t%s\\t%s\\t%s\\n\",\n\t\t\t\t\tcontainer, repo, tag, platforms.Format(img.Platform), id, size, created)\n\t\t\t}\n\t\t},\n\t\t\"CONTAINER\", \"REPOSITORY\", \"TAG\", \"PLATFORM\", \"IMAGE ID\", \"SIZE\", \"CREATED\")\n}\n"
  },
  {
    "path": "cmd/compose/kill.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\ntype killOptions struct {\n\t*ProjectOptions\n\tremoveOrphans bool\n\tsignal        string\n}\n\nfunc killCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := killOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"kill [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Force stop service containers\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runKill(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\tflags := cmd.Flags()\n\tremoveOrphans := utils.StringToBool(os.Getenv(ComposeRemoveOrphans))\n\tflags.BoolVar(&opts.removeOrphans, \"remove-orphans\", removeOrphans, \"Remove containers for services not defined in the Compose file\")\n\tflags.StringVarP(&opts.signal, \"signal\", \"s\", \"SIGKILL\", \"SIGNAL to send to the container\")\n\n\treturn cmd\n}\n\nfunc runKill(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts killOptions, services 
[]string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = backend.Kill(ctx, name, api.KillOptions{\n\t\tRemoveOrphans: opts.removeOrphans,\n\t\tProject:       project,\n\t\tServices:      services,\n\t\tSignal:        opts.signal,\n\t})\n\tif errors.Is(err, api.ErrNoResources) {\n\t\t_, _ = fmt.Fprintln(stdinfo(dockerCli), \"No container to kill\")\n\t\treturn nil\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "cmd/compose/list.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"regexp\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/opts\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype lsOptions struct {\n\tFormat string\n\tQuiet  bool\n\tAll    bool\n\tFilter opts.FilterOpt\n}\n\nfunc listCommand(dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\tlsOpts := lsOptions{Filter: opts.NewFilterOpt()}\n\tlsCmd := &cobra.Command{\n\t\tUse:   \"ls [OPTIONS]\",\n\t\tShort: \"List running compose projects\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runList(ctx, dockerCli, backendOptions, lsOpts)\n\t\t}),\n\t\tArgs:              cobra.NoArgs,\n\t\tValidArgsFunction: noCompletion(),\n\t}\n\tlsCmd.Flags().StringVar(&lsOpts.Format, \"format\", \"table\", \"Format the output. 
Values: [table | json]\")\n\tlsCmd.Flags().BoolVarP(&lsOpts.Quiet, \"quiet\", \"q\", false, \"Only display project names\")\n\tlsCmd.Flags().Var(&lsOpts.Filter, \"filter\", \"Filter output based on conditions provided\")\n\tlsCmd.Flags().BoolVarP(&lsOpts.All, \"all\", \"a\", false, \"Show all stopped Compose projects\")\n\n\treturn lsCmd\n}\n\nvar acceptedListFilters = map[string]bool{\n\t\"name\": true,\n}\n\n// match returns true if any of the values at key match the source string\nfunc match(filters client.Filters, field, source string) bool {\n\tif f, ok := filters[field]; ok && f[source] {\n\t\treturn true\n\t}\n\n\tfieldValues := filters[field]\n\tfor name2match := range fieldValues {\n\t\tisMatch, err := regexp.MatchString(name2match, source)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tif isMatch {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc runList(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, lsOpts lsOptions) error {\n\tfilters := lsOpts.Filter.Value()\n\n\tfor filter := range filters {\n\t\tif _, ok := acceptedListFilters[filter]; !ok {\n\t\t\treturn errors.New(\"invalid filter '\" + filter + \"'\")\n\t\t}\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tstackList, err := backend.List(ctx, api.ListOptions{All: lsOpts.All})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif len(filters) > 0 {\n\t\tvar filtered []api.Stack\n\t\tfor _, s := range stackList {\n\t\t\tif match(filters, \"name\", s.Name) {\n\t\t\t\tfiltered = append(filtered, s)\n\t\t\t}\n\t\t}\n\t\tstackList = filtered\n\t}\n\n\tif lsOpts.Quiet {\n\t\tfor _, s := range stackList {\n\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), s.Name)\n\t\t}\n\t\treturn nil\n\t}\n\n\tview := viewFromStackList(stackList)\n\treturn formatter.Print(view, lsOpts.Format, dockerCli.Out(), func(w io.Writer) {\n\t\tfor _, stack := range view {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\t%s\\n\", 
stack.Name, stack.Status, stack.ConfigFiles)\n\t\t}\n\t}, \"NAME\", \"STATUS\", \"CONFIG FILES\")\n}\n\ntype stackView struct {\n\tName        string\n\tStatus      string\n\tConfigFiles string\n}\n\nfunc viewFromStackList(stackList []api.Stack) []stackView {\n\tretList := make([]stackView, len(stackList))\n\tfor i, s := range stackList {\n\t\tretList[i] = stackView{\n\t\t\tName:        s.Name,\n\t\t\tStatus:      strings.TrimSpace(fmt.Sprintf(\"%s %s\", s.Status, s.Reason)),\n\t\t\tConfigFiles: s.ConfigFiles,\n\t\t}\n\t}\n\treturn retList\n}\n"
  },
  {
    "path": "cmd/compose/logs.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype logsOptions struct {\n\t*ProjectOptions\n\tcomposeOptions\n\tfollow     bool\n\tindex      int\n\ttail       string\n\tsince      string\n\tuntil      string\n\tnoColor    bool\n\tnoPrefix   bool\n\ttimestamps bool\n}\n\nfunc logsCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := logsOptions{\n\t\tProjectOptions: p,\n\t}\n\tlogsCmd := &cobra.Command{\n\t\tUse:   \"logs [OPTIONS] [SERVICE...]\",\n\t\tShort: \"View output from containers\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runLogs(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tPreRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif opts.index > 0 && len(args) != 1 {\n\t\t\t\treturn errors.New(\"--index requires one service to be selected\")\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := logsCmd.Flags()\n\tflags.BoolVarP(&opts.follow, \"follow\", \"f\", false, \"Follow log output\")\n\tflags.IntVar(&opts.index, \"index\", 
0, \"index of the container if service has multiple replicas\")\n\tflags.StringVar(&opts.since, \"since\", \"\", \"Show logs since timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes)\")\n\tflags.StringVar(&opts.until, \"until\", \"\", \"Show logs before a timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes)\")\n\tflags.BoolVar(&opts.noColor, \"no-color\", false, \"Produce monochrome output\")\n\tflags.BoolVar(&opts.noPrefix, \"no-log-prefix\", false, \"Don't print prefix in logs\")\n\tflags.BoolVarP(&opts.timestamps, \"timestamps\", \"t\", false, \"Show timestamps\")\n\tflags.StringVarP(&opts.tail, \"tail\", \"n\", \"all\", \"Number of lines to show from the end of the logs for each container\")\n\treturn logsCmd\n}\n\nfunc runLogs(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts logsOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// exclude services configured to ignore output (attach: false), until explicitly selected\n\tif project != nil && len(services) == 0 {\n\t\tfor n, service := range project.Services {\n\t\t\tif service.Attach == nil || *service.Attach {\n\t\t\t\tservices = append(services, n)\n\t\t\t}\n\t\t}\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tconsumer := formatter.NewLogConsumer(ctx, dockerCli.Out(), dockerCli.Err(), !opts.noColor, !opts.noPrefix, false)\n\treturn backend.Logs(ctx, name, consumer, api.LogOptions{\n\t\tProject:    project,\n\t\tServices:   services,\n\t\tFollow:     opts.follow,\n\t\tIndex:      opts.index,\n\t\tTail:       opts.tail,\n\t\tSince:      opts.since,\n\t\tUntil:      opts.until,\n\t\tTimestamps: opts.timestamps,\n\t})\n}\n\nvar _ api.LogConsumer = &logConsumer{}\n\ntype logConsumer struct {\n\tevents api.EventProcessor\n}\n\nfunc (l logConsumer) 
Log(containerName, message string) {\n\tl.events.On(api.Resource{\n\t\tID:   containerName,\n\t\tText: message,\n\t})\n}\n\nfunc (l logConsumer) Err(containerName, message string) {\n\tl.events.On(api.Resource{\n\t\tID:     containerName,\n\t\tStatus: api.Error,\n\t\tText:   message,\n\t})\n}\n\nfunc (l logConsumer) Status(containerName, message string) {\n\t// status messages are progress updates, not failures; Err already reports api.Error\n\tl.events.On(api.Resource{\n\t\tID:     containerName,\n\t\tStatus: api.Working,\n\t\tText:   message,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/options.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"slices\"\n\t\"sort\"\n\t\"strings\"\n\t\"text/tabwriter\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/template\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\n\t\"github.com/docker/compose/v5/cmd/display\"\n\t\"github.com/docker/compose/v5/cmd/prompt\"\n\t\"github.com/docker/compose/v5/internal/tracing\"\n)\n\nfunc applyPlatforms(project *types.Project, buildForSinglePlatform bool) error {\n\tdefaultPlatform := project.Environment[\"DOCKER_DEFAULT_PLATFORM\"]\n\tfor name, service := range project.Services {\n\t\tif service.Build == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\t// default platform only applies if the service doesn't specify\n\t\tif defaultPlatform != \"\" && service.Platform == \"\" {\n\t\t\tif len(service.Build.Platforms) > 0 && !slices.Contains(service.Build.Platforms, defaultPlatform) {\n\t\t\t\treturn fmt.Errorf(\"service %q build.platforms does not support value set by DOCKER_DEFAULT_PLATFORM: %s\", name, defaultPlatform)\n\t\t\t}\n\t\t\tservice.Platform = defaultPlatform\n\t\t}\n\n\t\tif service.Platform != \"\" {\n\t\t\tif len(service.Build.Platforms) > 0 {\n\t\t\t\tif !slices.Contains(service.Build.Platforms, service.Platform) {\n\t\t\t\t\treturn 
fmt.Errorf(\"service %q build configuration does not support platform: %s\", name, service.Platform)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif buildForSinglePlatform || len(service.Build.Platforms) == 0 {\n\t\t\t\t// if we're building for a single platform, we want to build for the platform we'll use to run the image\n\t\t\t\t// similarly, if no build platforms were explicitly specified, it makes sense to build for the platform\n\t\t\t\t// the image is designed for rather than allowing the builder to infer the platform\n\t\t\t\tservice.Build.Platforms = []string{service.Platform}\n\t\t\t}\n\t\t}\n\n\t\t// services can specify that they should be built for multiple platforms, which can be used\n\t\t// with `docker compose build` to produce a multi-arch image\n\t\t// other cases, such as `up` and `run`, need a single architecture to actually run\n\t\t// if there is only a single platform present (which might have been inferred\n\t\t// from service.Platform above), it will be used, even if it requires emulation.\n\t\t// if there's more than one platform, then the list is cleared so that the builder\n\t\t// can decide.\n\t\t// TODO(milas): there's no validation that the platform the builder will pick is actually one\n\t\t// \tof the supported platforms from the build definition\n\t\t// \te.g. 
`build.platforms: [linux/arm64, linux/amd64]` on a `linux/ppc64` machine would build\n\t\t// \tfor `linux/ppc64` instead of returning an error that it's not a valid platform for the service.\n\t\tif buildForSinglePlatform && len(service.Build.Platforms) > 1 {\n\t\t\t// empty indicates that the builder gets to decide\n\t\t\tservice.Build.Platforms = nil\n\t\t}\n\t\tproject.Services[name] = service\n\t}\n\treturn nil\n}\n\n// isRemoteConfig checks if the main compose file is from a remote source (OCI or Git)\nfunc isRemoteConfig(dockerCli command.Cli, options buildOptions) bool {\n\tif len(options.ConfigPaths) == 0 {\n\t\treturn false\n\t}\n\tremoteLoaders := options.remoteLoaders(dockerCli)\n\tfor _, loader := range remoteLoaders {\n\t\tif loader.Accept(options.ConfigPaths[0]) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// checksForRemoteStack handles environment variable prompts for remote configurations\nfunc checksForRemoteStack(ctx context.Context, dockerCli command.Cli, project *types.Project, options buildOptions, assumeYes bool, cmdEnvs []string) error {\n\tif !isRemoteConfig(dockerCli, options) {\n\t\treturn nil\n\t}\n\tif metrics, ok := ctx.Value(tracing.MetricsKey{}).(tracing.Metrics); ok && metrics.CountIncludesRemote > 0 {\n\t\tif err := confirmRemoteIncludes(dockerCli, options, assumeYes); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tdisplayLocationRemoteStack(dockerCli, project, options)\n\treturn promptForInterpolatedVariables(ctx, dockerCli, options.ProjectOptions, assumeYes, cmdEnvs)\n}\n\n// Prepare the values map and collect all variables info\ntype varInfo struct {\n\tname         string\n\tvalue        string\n\tsource       string\n\trequired     bool\n\tdefaultValue string\n}\n\n// promptForInterpolatedVariables displays all variables and their values at once,\n// then prompts for confirmation\nfunc promptForInterpolatedVariables(ctx context.Context, dockerCli command.Cli, projectOptions *ProjectOptions, assumeYes bool, cmdEnvs 
[]string) error {\n\tif assumeYes {\n\t\treturn nil\n\t}\n\n\tvarsInfo, noVariables, err := extractInterpolationVariablesFromModel(ctx, dockerCli, projectOptions, cmdEnvs)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif noVariables {\n\t\treturn nil\n\t}\n\n\tdisplayInterpolationVariables(dockerCli.Out(), varsInfo)\n\n\t// Prompt for confirmation\n\tuserInput := prompt.NewPrompt(dockerCli.In(), dockerCli.Out())\n\tmsg := \"\\nDo you want to proceed with these variables? [Y/n]: \"\n\tconfirmed, err := userInput.Confirm(msg, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif !confirmed {\n\t\treturn fmt.Errorf(\"operation cancelled by user\")\n\t}\n\n\treturn nil\n}\n\nfunc extractInterpolationVariablesFromModel(ctx context.Context, dockerCli command.Cli, projectOptions *ProjectOptions, cmdEnvs []string) ([]varInfo, bool, error) {\n\tcmdEnvMap := extractEnvCLIDefined(cmdEnvs)\n\n\t// Create a model without interpolation to extract variables\n\topts := configOptions{\n\t\tnoInterpolate:  true,\n\t\tProjectOptions: projectOptions,\n\t}\n\n\tmodel, err := opts.ToModel(ctx, dockerCli, nil, cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn nil, false, err\n\t}\n\n\t// Extract variables that need interpolation\n\tvariables := template.ExtractVariables(model, template.DefaultPattern)\n\tif len(variables) == 0 {\n\t\treturn nil, true, nil\n\t}\n\n\tvar varsInfo []varInfo\n\tproposedValues := make(map[string]string)\n\n\tfor name, variable := range variables {\n\t\tinfo := varInfo{\n\t\t\tname:         name,\n\t\t\trequired:     variable.Required,\n\t\t\tdefaultValue: variable.DefaultValue,\n\t\t}\n\n\t\t// Determine value and source based on priority\n\t\tif value, exists := cmdEnvMap[name]; exists {\n\t\t\tinfo.value = value\n\t\t\tinfo.source = \"command-line\"\n\t\t\tproposedValues[name] = value\n\t\t} else if value, exists := os.LookupEnv(name); exists {\n\t\t\tinfo.value = value\n\t\t\tinfo.source = \"environment\"\n\t\t\tproposedValues[name] = 
value\n\t\t} else if variable.DefaultValue != \"\" {\n\t\t\tinfo.value = variable.DefaultValue\n\t\t\tinfo.source = \"compose file\"\n\t\t\tproposedValues[name] = variable.DefaultValue\n\t\t} else {\n\t\t\tinfo.value = \"<unset>\"\n\t\t\tinfo.source = \"none\"\n\t\t}\n\n\t\tvarsInfo = append(varsInfo, info)\n\t}\n\treturn varsInfo, false, nil\n}\n\nfunc extractEnvCLIDefined(cmdEnvs []string) map[string]string {\n\t// Parse command-line environment variables\n\tcmdEnvMap := make(map[string]string)\n\tfor _, env := range cmdEnvs {\n\t\tkey, val, ok := strings.Cut(env, \"=\")\n\t\tif ok {\n\t\t\tcmdEnvMap[key] = val\n\t\t}\n\t}\n\treturn cmdEnvMap\n}\n\nfunc displayInterpolationVariables(writer io.Writer, varsInfo []varInfo) {\n\t// Display all variables in a table format\n\t_, _ = fmt.Fprintln(writer, \"\\nFound the following variables in configuration:\")\n\n\tw := tabwriter.NewWriter(writer, 0, 0, 3, ' ', 0)\n\t_, _ = fmt.Fprintln(w, \"VARIABLE\\tVALUE\\tSOURCE\\tREQUIRED\\tDEFAULT\")\n\tsort.Slice(varsInfo, func(a, b int) bool {\n\t\treturn varsInfo[a].name < varsInfo[b].name\n\t})\n\tfor _, info := range varsInfo {\n\t\trequired := \"no\"\n\t\tif info.required {\n\t\t\trequired = \"yes\"\n\t\t}\n\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\t%s\\t%s\\t%s\\n\",\n\t\t\tinfo.name,\n\t\t\tinfo.value,\n\t\t\tinfo.source,\n\t\t\trequired,\n\t\t\tinfo.defaultValue,\n\t\t)\n\t}\n\t_ = w.Flush()\n}\n\nfunc displayLocationRemoteStack(dockerCli command.Cli, project *types.Project, options buildOptions) {\n\tmainComposeFile := options.ProjectOptions.ConfigPaths[0] //nolint:staticcheck\n\tif display.Mode != display.ModeQuiet && display.Mode != display.ModeJSON {\n\t\t_, _ = fmt.Fprintf(dockerCli.Out(), \"Your compose stack %q is stored in %q\\n\", mainComposeFile, project.WorkingDir)\n\t}\n}\n\nfunc confirmRemoteIncludes(dockerCli command.Cli, options buildOptions, assumeYes bool) error {\n\tif assumeYes {\n\t\treturn nil\n\t}\n\n\tvar remoteIncludes []string\n\tremoteLoaders := 
options.ProjectOptions.remoteLoaders(dockerCli) //nolint:staticcheck\n\tfor _, cf := range options.ProjectOptions.ConfigPaths {          //nolint:staticcheck\n\t\tfor _, loader := range remoteLoaders {\n\t\t\tif loader.Accept(cf) {\n\t\t\t\tremoteIncludes = append(remoteIncludes, cf)\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\tif len(remoteIncludes) == 0 {\n\t\treturn nil\n\t}\n\n\t_, _ = fmt.Fprintln(dockerCli.Out(), \"\\nWarning: This Compose project includes files from remote sources:\")\n\tfor _, include := range remoteIncludes {\n\t\t_, _ = fmt.Fprintf(dockerCli.Out(), \"  - %s\\n\", include)\n\t}\n\t_, _ = fmt.Fprintln(dockerCli.Out(), \"\\nRemote includes could potentially be malicious. Make sure you trust the source.\")\n\n\tmsg := \"Do you want to continue? [y/N]: \"\n\tconfirmed, err := prompt.NewPrompt(dockerCli.In(), dockerCli.Out()).Confirm(msg, false)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !confirmed {\n\t\treturn fmt.Errorf(\"operation cancelled by user\")\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "cmd/compose/options_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/streams\"\n\t\"github.com/stretchr/testify/require\"\n\t\"go.uber.org/mock/gomock\"\n\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\nfunc TestApplyPlatforms_InferFromRuntime(t *testing.T) {\n\tmakeProject := func() *types.Project {\n\t\treturn &types.Project{\n\t\t\tServices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName:  \"test\",\n\t\t\t\t\tImage: \"foo\",\n\t\t\t\t\tBuild: &types.BuildConfig{\n\t\t\t\t\t\tContext: \".\",\n\t\t\t\t\t\tPlatforms: []string{\n\t\t\t\t\t\t\t\"linux/amd64\",\n\t\t\t\t\t\t\t\"linux/arm64\",\n\t\t\t\t\t\t\t\"alice/32\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tPlatform: \"alice/32\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t}\n\n\tt.Run(\"SinglePlatform\", func(t *testing.T) {\n\t\tproject := makeProject()\n\t\trequire.NoError(t, applyPlatforms(project, true))\n\t\trequire.EqualValues(t, []string{\"alice/32\"}, project.Services[\"test\"].Build.Platforms)\n\t})\n\n\tt.Run(\"MultiPlatform\", func(t *testing.T) {\n\t\tproject := makeProject()\n\t\trequire.NoError(t, applyPlatforms(project, false))\n\t\trequire.EqualValues(t, []string{\"linux/amd64\", \"linux/arm64\", 
\"alice/32\"},\n\t\t\tproject.Services[\"test\"].Build.Platforms)\n\t})\n}\n\nfunc TestApplyPlatforms_DockerDefaultPlatform(t *testing.T) {\n\tmakeProject := func() *types.Project {\n\t\treturn &types.Project{\n\t\t\tEnvironment: map[string]string{\n\t\t\t\t\"DOCKER_DEFAULT_PLATFORM\": \"linux/amd64\",\n\t\t\t},\n\t\t\tServices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName:  \"test\",\n\t\t\t\t\tImage: \"foo\",\n\t\t\t\t\tBuild: &types.BuildConfig{\n\t\t\t\t\t\tContext: \".\",\n\t\t\t\t\t\tPlatforms: []string{\n\t\t\t\t\t\t\t\"linux/amd64\",\n\t\t\t\t\t\t\t\"linux/arm64\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t}\n\n\tt.Run(\"SinglePlatform\", func(t *testing.T) {\n\t\tproject := makeProject()\n\t\trequire.NoError(t, applyPlatforms(project, true))\n\t\trequire.EqualValues(t, []string{\"linux/amd64\"}, project.Services[\"test\"].Build.Platforms)\n\t})\n\n\tt.Run(\"MultiPlatform\", func(t *testing.T) {\n\t\tproject := makeProject()\n\t\trequire.NoError(t, applyPlatforms(project, false))\n\t\trequire.EqualValues(t, []string{\"linux/amd64\", \"linux/arm64\"},\n\t\t\tproject.Services[\"test\"].Build.Platforms)\n\t})\n}\n\nfunc TestApplyPlatforms_UnsupportedPlatform(t *testing.T) {\n\tmakeProject := func() *types.Project {\n\t\treturn &types.Project{\n\t\t\tEnvironment: map[string]string{\n\t\t\t\t\"DOCKER_DEFAULT_PLATFORM\": \"commodore/64\",\n\t\t\t},\n\t\t\tServices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName:  \"test\",\n\t\t\t\t\tImage: \"foo\",\n\t\t\t\t\tBuild: &types.BuildConfig{\n\t\t\t\t\t\tContext: \".\",\n\t\t\t\t\t\tPlatforms: []string{\n\t\t\t\t\t\t\t\"linux/amd64\",\n\t\t\t\t\t\t\t\"linux/arm64\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t}\n\n\tt.Run(\"SinglePlatform\", func(t *testing.T) {\n\t\tproject := makeProject()\n\t\trequire.EqualError(t, applyPlatforms(project, true),\n\t\t\t`service \"test\" build.platforms does not support value set by DOCKER_DEFAULT_PLATFORM: 
commodore/64`)\n\t})\n\n\tt.Run(\"MultiPlatform\", func(t *testing.T) {\n\t\tproject := makeProject()\n\t\trequire.EqualError(t, applyPlatforms(project, false),\n\t\t\t`service \"test\" build.platforms does not support value set by DOCKER_DEFAULT_PLATFORM: commodore/64`)\n\t})\n}\n\nfunc TestIsRemoteConfig(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcli := mocks.NewMockCli(ctrl)\n\n\ttests := []struct {\n\t\tname        string\n\t\tconfigPaths []string\n\t\twant        bool\n\t}{\n\t\t{\n\t\t\tname:        \"empty config paths\",\n\t\t\tconfigPaths: []string{},\n\t\t\twant:        false,\n\t\t},\n\t\t{\n\t\t\tname:        \"local file\",\n\t\t\tconfigPaths: []string{\"docker-compose.yaml\"},\n\t\t\twant:        false,\n\t\t},\n\t\t{\n\t\t\tname:        \"OCI reference\",\n\t\t\tconfigPaths: []string{\"oci://registry.example.com/stack:latest\"},\n\t\t\twant:        true,\n\t\t},\n\t\t{\n\t\t\tname:        \"GIT reference\",\n\t\t\tconfigPaths: []string{\"git://github.com/user/repo.git\"},\n\t\t\twant:        true,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\topts := buildOptions{\n\t\t\t\tProjectOptions: &ProjectOptions{\n\t\t\t\t\tConfigPaths: tt.configPaths,\n\t\t\t\t},\n\t\t\t}\n\t\t\tgot := isRemoteConfig(cli, opts)\n\t\t\trequire.Equal(t, tt.want, got)\n\t\t})\n\t}\n}\n\nfunc TestDisplayLocationRemoteStack(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcli := mocks.NewMockCli(ctrl)\n\n\tbuf := new(bytes.Buffer)\n\tcli.EXPECT().Out().Return(streams.NewOut(buf)).AnyTimes()\n\n\tproject := &types.Project{\n\t\tName:       \"test-project\",\n\t\tWorkingDir: \"/tmp/test\",\n\t}\n\n\toptions := buildOptions{\n\t\tProjectOptions: &ProjectOptions{\n\t\t\tConfigPaths: []string{\"oci://registry.example.com/stack:latest\"},\n\t\t},\n\t}\n\n\tdisplayLocationRemoteStack(cli, project, options)\n\n\toutput := buf.String()\n\trequire.Equal(t, output, 
fmt.Sprintf(\"Your compose stack %q is stored in %q\\n\", \"oci://registry.example.com/stack:latest\", \"/tmp/test\"))\n}\n\nfunc TestDisplayInterpolationVariables(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttmpDir := t.TempDir()\n\n\t// Create a temporary compose file\n\tcomposeContent := `\nservices:\n  app:\n    image: nginx\n    environment:\n      - TEST_VAR=${TEST_VAR:?required}  # required with default\n      - API_KEY=${API_KEY:?}            # required without default\n      - DEBUG=${DEBUG:-true}            # optional with default\n      - UNSET_VAR                       # optional without default\n`\n\tcomposePath := filepath.Join(tmpDir, \"docker-compose.yml\")\n\trequire.NoError(t, os.WriteFile(composePath, []byte(composeContent), 0o644))\n\n\tbuf := new(bytes.Buffer)\n\tcli := mocks.NewMockCli(ctrl)\n\tcli.EXPECT().Out().Return(streams.NewOut(buf)).AnyTimes()\n\n\t// Create ProjectOptions with the temporary compose file\n\tprojectOptions := &ProjectOptions{\n\t\tConfigPaths: []string{composePath},\n\t}\n\n\t// Set up the context with necessary environment variables\n\tt.Setenv(\"TEST_VAR\", \"test-value\")\n\tt.Setenv(\"API_KEY\", \"123456\")\n\n\t// Extract variables from the model\n\tinfo, noVariables, err := extractInterpolationVariablesFromModel(t.Context(), cli, projectOptions, []string{})\n\trequire.NoError(t, err)\n\trequire.False(t, noVariables)\n\n\t// Display the variables\n\tdisplayInterpolationVariables(cli.Out(), info)\n\n\t// Expected output format with proper spacing\n\texpected := \"\\nFound the following variables in configuration:\\n\" +\n\t\t\"VARIABLE   VALUE       SOURCE        REQUIRED   DEFAULT\\n\" +\n\t\t\"API_KEY    123456      environment   yes         \\n\" +\n\t\t\"DEBUG      true       compose file  no         true\\n\" +\n\t\t\"TEST_VAR   test-value  environment   yes         \\n\"\n\n\t// Normalize spaces and newlines for comparison\n\tnormalizeSpaces := func(s string) string {\n\t\t// 
Replace multiple spaces with a single space\n\t\ts = strings.Join(strings.Fields(strings.TrimSpace(s)), \" \")\n\t\treturn s\n\t}\n\n\tactualOutput := buf.String()\n\n\t// Compare normalized strings\n\trequire.Equal(t,\n\t\tnormalizeSpaces(expected),\n\t\tnormalizeSpaces(actualOutput),\n\t\t\"\\nExpected:\\n%s\\nGot:\\n%s\", expected, actualOutput)\n}\n\nfunc TestConfirmRemoteIncludes(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcli := mocks.NewMockCli(ctrl)\n\n\ttests := []struct {\n\t\tname       string\n\t\topts       buildOptions\n\t\tassumeYes  bool\n\t\tuserInput  string\n\t\twantErr    bool\n\t\terrMessage string\n\t\twantPrompt bool\n\t\twantOutput string\n\t}{\n\t\t{\n\t\t\tname: \"no remote includes\",\n\t\t\topts: buildOptions{\n\t\t\t\tProjectOptions: &ProjectOptions{\n\t\t\t\t\tConfigPaths: []string{\n\t\t\t\t\t\t\"docker-compose.yaml\",\n\t\t\t\t\t\t\"./local/path/compose.yaml\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tassumeYes:  false,\n\t\t\twantErr:    false,\n\t\t\twantPrompt: false,\n\t\t},\n\t\t{\n\t\t\tname: \"assume yes with remote includes\",\n\t\t\topts: buildOptions{\n\t\t\t\tProjectOptions: &ProjectOptions{\n\t\t\t\t\tConfigPaths: []string{\n\t\t\t\t\t\t\"oci://registry.example.com/stack:latest\",\n\t\t\t\t\t\t\"git://github.com/user/repo.git\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tassumeYes:  true,\n\t\t\twantErr:    false,\n\t\t\twantPrompt: false,\n\t\t},\n\t\t{\n\t\t\tname: \"user confirms remote includes\",\n\t\t\topts: buildOptions{\n\t\t\t\tProjectOptions: &ProjectOptions{\n\t\t\t\t\tConfigPaths: []string{\n\t\t\t\t\t\t\"oci://registry.example.com/stack:latest\",\n\t\t\t\t\t\t\"git://github.com/user/repo.git\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tassumeYes:  false,\n\t\t\tuserInput:  \"y\\n\",\n\t\t\twantErr:    false,\n\t\t\twantPrompt: true,\n\t\t\twantOutput: \"\\nWarning: This Compose project includes files from remote sources:\\n\" +\n\t\t\t\t\"  - 
oci://registry.example.com/stack:latest\\n\" +\n\t\t\t\t\"  - git://github.com/user/repo.git\\n\" +\n\t\t\t\t\"\\nRemote includes could potentially be malicious. Make sure you trust the source.\\n\" +\n\t\t\t\t\"Do you want to continue? [y/N]: \",\n\t\t},\n\t\t{\n\t\t\tname: \"user rejects remote includes\",\n\t\t\topts: buildOptions{\n\t\t\t\tProjectOptions: &ProjectOptions{\n\t\t\t\t\tConfigPaths: []string{\n\t\t\t\t\t\t\"oci://registry.example.com/stack:latest\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tassumeYes:  false,\n\t\t\tuserInput:  \"n\\n\",\n\t\t\twantErr:    true,\n\t\t\terrMessage: \"operation cancelled by user\",\n\t\t\twantPrompt: true,\n\t\t\twantOutput: \"\\nWarning: This Compose project includes files from remote sources:\\n\" +\n\t\t\t\t\"  - oci://registry.example.com/stack:latest\\n\" +\n\t\t\t\t\"\\nRemote includes could potentially be malicious. Make sure you trust the source.\\n\" +\n\t\t\t\t\"Do you want to continue? [y/N]: \",\n\t\t},\n\t}\n\n\tbuf := new(bytes.Buffer)\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tcli.EXPECT().Out().Return(streams.NewOut(buf)).AnyTimes()\n\n\t\t\tif tt.wantPrompt {\n\t\t\t\tinbuf := io.NopCloser(bytes.NewBufferString(tt.userInput))\n\t\t\t\tcli.EXPECT().In().Return(streams.NewIn(inbuf)).AnyTimes()\n\t\t\t}\n\n\t\t\terr := confirmRemoteIncludes(cli, tt.opts, tt.assumeYes)\n\n\t\t\tif tt.wantErr {\n\t\t\t\trequire.Error(t, err)\n\t\t\t\trequire.Equal(t, tt.errMessage, err.Error())\n\t\t\t} else {\n\t\t\t\trequire.NoError(t, err)\n\t\t\t}\n\n\t\t\tif tt.wantOutput != \"\" {\n\t\t\t\trequire.Equal(t, tt.wantOutput, buf.String())\n\t\t\t}\n\t\t\tbuf.Reset()\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cmd/compose/pause.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype pauseOptions struct {\n\t*ProjectOptions\n}\n\nfunc pauseCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := pauseOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"pause [SERVICE...]\",\n\t\tShort: \"Pause services\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runPause(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\treturn cmd\n}\n\nfunc runPause(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts pauseOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Pause(ctx, name, api.PauseOptions{\n\t\tServices: services,\n\t\tProject:  project,\n\t})\n}\n\ntype unpauseOptions struct {\n\t*ProjectOptions\n}\n\nfunc unpauseCommand(p *ProjectOptions, dockerCli command.Cli, 
backendOptions *BackendOptions) *cobra.Command {\n\topts := unpauseOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"unpause [SERVICE...]\",\n\t\tShort: \"Unpause services\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runUnPause(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\treturn cmd\n}\n\nfunc runUnPause(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts unpauseOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.UnPause(ctx, name, api.PauseOptions{\n\t\tServices: services,\n\t\tProject:  project,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/port.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype portOptions struct {\n\t*ProjectOptions\n\tport     uint16\n\tprotocol string\n\tindex    int\n}\n\nfunc portCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := portOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"port [OPTIONS] SERVICE PRIVATE_PORT\",\n\t\tShort: \"Print the public port for a port binding\",\n\t\tArgs:  cobra.MinimumNArgs(2),\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tport, err := strconv.ParseUint(args[1], 10, 16)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\topts.port = uint16(port)\n\t\t\topts.protocol = strings.ToLower(opts.protocol)\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runPort(ctx, dockerCli, backendOptions, opts, args[0])\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tcmd.Flags().StringVar(&opts.protocol, \"protocol\", \"tcp\", \"tcp or udp\")\n\tcmd.Flags().IntVar(&opts.index, \"index\", 0, \"Index of the container if service 
has multiple replicas\")\n\treturn cmd\n}\n\nfunc runPort(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts portOptions, service string) error {\n\tprojectName, err := opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tip, port, err := backend.Port(ctx, projectName, service, opts.port, api.PortOptions{\n\t\tProtocol: opts.protocol,\n\t\tIndex:    opts.index,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t_, _ = fmt.Fprintf(dockerCli.Out(), \"%s:%d\\n\", ip, port)\n\treturn nil\n}\n"
  },
  {
    "path": "cmd/compose/ps.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"slices\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\tcliformatter \"github.com/docker/cli/cli/command/formatter\"\n\tcliflags \"github.com/docker/cli/cli/flags\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype psOptions struct {\n\t*ProjectOptions\n\tFormat   string\n\tAll      bool\n\tQuiet    bool\n\tServices bool\n\tFilter   string\n\tStatus   []string\n\tnoTrunc  bool\n\tOrphans  bool\n}\n\nfunc (p *psOptions) parseFilter() error {\n\tif p.Filter == \"\" {\n\t\treturn nil\n\t}\n\tkey, val, ok := strings.Cut(p.Filter, \"=\")\n\tif !ok {\n\t\treturn errors.New(\"arguments to --filter should be in form KEY=VAL\")\n\t}\n\tswitch key {\n\tcase \"status\":\n\t\tp.Status = append(p.Status, val)\n\t\treturn nil\n\tcase \"source\":\n\t\treturn api.ErrNotImplemented\n\tdefault:\n\t\treturn fmt.Errorf(\"unknown filter %s\", key)\n\t}\n}\n\nfunc psCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := psOptions{\n\t\tProjectOptions: p,\n\t}\n\tpsCmd := &cobra.Command{\n\t\tUse:   \"ps [OPTIONS] [SERVICE...]\",\n\t\tShort: \"List containers\",\n\t\tPreRunE: 
func(cmd *cobra.Command, args []string) error {\n\t\t\treturn opts.parseFilter()\n\t\t},\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runPs(ctx, dockerCli, backendOptions, args, opts)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := psCmd.Flags()\n\tflags.StringVar(&opts.Format, \"format\", \"table\", cliflags.FormatHelp)\n\tflags.StringVar(&opts.Filter, \"filter\", \"\", \"Filter services by a property (supported filters: status)\")\n\tflags.StringArrayVar(&opts.Status, \"status\", []string{}, \"Filter services by status. Values: [paused | restarting | removing | running | dead | created | exited]\")\n\tflags.BoolVarP(&opts.Quiet, \"quiet\", \"q\", false, \"Only display IDs\")\n\tflags.BoolVar(&opts.Services, \"services\", false, \"Display services\")\n\tflags.BoolVar(&opts.Orphans, \"orphans\", true, \"Include orphaned services (not declared by project)\")\n\tflags.BoolVarP(&opts.All, \"all\", \"a\", false, \"Show all stopped containers (including those created by the run command)\")\n\tflags.BoolVar(&opts.noTrunc, \"no-trunc\", false, \"Don't truncate output\")\n\treturn psCmd\n}\n\nfunc runPs(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, services []string, opts psOptions) error { //nolint:gocyclo\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif project != nil {\n\t\tnames := project.ServiceNames()\n\t\tif len(services) > 0 {\n\t\t\tfor _, service := range services {\n\t\t\t\tif !slices.Contains(names, service) {\n\t\t\t\t\treturn fmt.Errorf(\"no such service: %s\", service)\n\t\t\t\t}\n\t\t\t}\n\t\t} else if !opts.Orphans {\n\t\t\t// until user asks to list orphaned services, we only include those declared in project\n\t\t\tservices = names\n\t\t}\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tcontainers, 
err := backend.Ps(ctx, name, api.PsOptions{\n\t\tProject:  project,\n\t\tAll:      opts.All || len(opts.Status) != 0,\n\t\tServices: services,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif len(opts.Status) != 0 {\n\t\tcontainers = filterByStatus(containers, opts.Status)\n\t}\n\n\tsort.Slice(containers, func(i, j int) bool {\n\t\treturn containers[i].Name < containers[j].Name\n\t})\n\n\tif opts.Quiet {\n\t\tfor _, c := range containers {\n\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), c.ID)\n\t\t}\n\t\treturn nil\n\t}\n\n\tif opts.Services {\n\t\tservices := []string{}\n\t\tfor _, c := range containers {\n\t\t\ts := c.Service\n\t\t\tif !slices.Contains(services, s) {\n\t\t\t\tservices = append(services, s)\n\t\t\t}\n\t\t}\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), strings.Join(services, \"\\n\"))\n\t\treturn nil\n\t}\n\n\tif opts.Format == \"\" {\n\t\topts.Format = dockerCli.ConfigFile().PsFormat\n\t}\n\n\tcontainerCtx := cliformatter.Context{\n\t\tOutput: dockerCli.Out(),\n\t\tFormat: formatter.NewContainerFormat(opts.Format, opts.Quiet, false),\n\t\tTrunc:  !opts.noTrunc,\n\t}\n\treturn formatter.ContainerWrite(containerCtx, containers)\n}\n\nfunc filterByStatus(containers []api.ContainerSummary, statuses []string) []api.ContainerSummary {\n\tvar filtered []api.ContainerSummary\n\tfor _, c := range containers {\n\t\tif slices.Contains(statuses, string(c.State)) {\n\t\t\tfiltered = append(filtered, c)\n\t\t}\n\t}\n\treturn filtered\n}\n"
  },
  {
    "path": "cmd/compose/publish.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype publishOptions struct {\n\t*ProjectOptions\n\tresolveImageDigests bool\n\tociVersion          string\n\twithEnvironment     bool\n\tassumeYes           bool\n\tapp                 bool\n\tinsecureRegistry    bool\n}\n\nfunc publishCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := publishOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"publish [OPTIONS] REPOSITORY[:TAG]\",\n\t\tShort: \"Publish compose application\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runPublish(ctx, dockerCli, backendOptions, opts, args[0])\n\t\t}),\n\t\tArgs: cli.ExactArgs(1),\n\t}\n\tflags := cmd.Flags()\n\tflags.BoolVar(&opts.resolveImageDigests, \"resolve-image-digests\", false, \"Pin image tags to digests\")\n\tflags.StringVar(&opts.ociVersion, \"oci-version\", \"\", \"OCI image/artifact specification version (automatically determined by default)\")\n\tflags.BoolVar(&opts.withEnvironment, \"with-env\", false, 
\"Include environment variables in the published OCI artifact\")\n\tflags.BoolVarP(&opts.assumeYes, \"yes\", \"y\", false, `Assume \"yes\" as answer to all prompts`)\n\tflags.BoolVar(&opts.app, \"app\", false, \"Publish compose application (includes referenced images)\")\n\tflags.BoolVar(&opts.insecureRegistry, \"insecure-registry\", false, \"Use insecure registry\")\n\tflags.SetNormalizeFunc(func(f *pflag.FlagSet, name string) pflag.NormalizedName {\n\t\t// assumeYes was introduced by mistake as `--y`\n\t\tif name == \"y\" {\n\t\t\tlogrus.Warn(\"--y is deprecated, please use --yes instead\")\n\t\t\tname = \"yes\"\n\t\t}\n\t\treturn pflag.NormalizedName(name)\n\t})\n\t// Should **only** be used for testing purposes; we don't want to promote use of insecure registries\n\t_ = flags.MarkHidden(\"insecure-registry\")\n\n\treturn cmd\n}\n\nfunc runPublish(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts publishOptions, repository string) error {\n\tif opts.assumeYes {\n\t\tbackendOptions.Options = append(backendOptions.Options, compose.WithPrompt(compose.AlwaysOkPrompt()))\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, metrics, err := opts.ToProject(ctx, dockerCli, backend, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif metrics.CountIncludesLocal > 0 {\n\t\treturn errors.New(\"cannot publish compose file with local includes\")\n\t}\n\n\treturn backend.Publish(ctx, project, repository, api.PublishOptions{\n\t\tResolveImageDigests: opts.resolveImageDigests || opts.app,\n\t\tApplication:         opts.app,\n\t\tOCIVersion:          api.OCIVersion(opts.ociVersion),\n\t\tWithEnvironment:     opts.withEnvironment,\n\t\tInsecureRegistry:    opts.insecureRegistry,\n\t})\n}\n
  },
  {
    "path": "cmd/compose/pull.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/morikuni/aec\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype pullOptions struct {\n\t*ProjectOptions\n\tcomposeOptions\n\tquiet              bool\n\tparallel           bool\n\tnoParallel         bool\n\tincludeDeps        bool\n\tignorePullFailures bool\n\tnoBuildable        bool\n\tpolicy             string\n}\n\nfunc pullCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := pullOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"pull [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Pull service images\",\n\t\tPreRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif cmd.Flags().Changed(\"no-parallel\") {\n\t\t\t\tfmt.Fprint(os.Stderr, aec.Apply(\"option '--no-parallel' is DEPRECATED and will be ignored.\\n\", aec.RedF))\n\t\t\t}\n\t\t\tif cmd.Flags().Changed(\"parallel\") {\n\t\t\t\tfmt.Fprint(os.Stderr, aec.Apply(\"option '--parallel' is DEPRECATED and will be ignored.\\n\", aec.RedF))\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t\tRunE: Adapt(func(ctx 
context.Context, args []string) error {\n\t\t\treturn runPull(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.BoolVarP(&opts.quiet, \"quiet\", \"q\", false, \"Pull without printing progress information\")\n\tcmd.Flags().BoolVar(&opts.includeDeps, \"include-deps\", false, \"Also pull services declared as dependencies\")\n\tcmd.Flags().BoolVar(&opts.parallel, \"parallel\", true, \"DEPRECATED pull multiple images in parallel\")\n\tflags.MarkHidden(\"parallel\") //nolint:errcheck\n\tcmd.Flags().BoolVar(&opts.noParallel, \"no-parallel\", true, \"DEPRECATED disable parallel pulling\")\n\tflags.MarkHidden(\"no-parallel\") //nolint:errcheck\n\tcmd.Flags().BoolVar(&opts.ignorePullFailures, \"ignore-pull-failures\", false, \"Pull what it can and ignores images with pull failures\")\n\tcmd.Flags().BoolVar(&opts.noBuildable, \"ignore-buildable\", false, \"Ignore images that can be built\")\n\tcmd.Flags().StringVar(&opts.policy, \"policy\", \"\", `Apply pull policy (\"missing\"|\"always\")`)\n\treturn cmd\n}\n\nfunc (opts pullOptions) apply(project *types.Project, services []string) (*types.Project, error) {\n\tif !opts.includeDeps {\n\t\tvar err error\n\t\tproject, err = project.WithSelectedServices(services, types.IgnoreDependencies)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif opts.policy != \"\" {\n\t\tfor i, service := range project.Services {\n\t\t\tif service.Image == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tservice.PullPolicy = opts.policy\n\t\t\tproject.Services[i] = service\n\t\t}\n\t}\n\treturn project, nil\n}\n\nfunc runPull(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts pullOptions, services []string) error {\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ToProject(ctx, dockerCli, backend, services, 
cli.WithoutEnvironmentResolution)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, err = opts.apply(project, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn backend.Pull(ctx, project, api.PullOptions{\n\t\tQuiet:           opts.quiet,\n\t\tIgnoreFailures:  opts.ignorePullFailures,\n\t\tIgnoreBuildable: opts.noBuildable,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/pullOptions_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestApplyPullOptions(t *testing.T) {\n\tproject := &types.Project{\n\t\tServices: types.Services{\n\t\t\t\"must-build\": {\n\t\t\t\tName: \"must-build\",\n\t\t\t\t// No image, local build only\n\t\t\t\tBuild: &types.BuildConfig{\n\t\t\t\t\tContext: \".\",\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"has-build\": {\n\t\t\t\tName:  \"has-build\",\n\t\t\t\tImage: \"registry.example.com/myservice\",\n\t\t\t\tBuild: &types.BuildConfig{\n\t\t\t\t\tContext: \".\",\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"must-pull\": {\n\t\t\t\tName:  \"must-pull\",\n\t\t\t\tImage: \"registry.example.com/another-service\",\n\t\t\t},\n\t\t},\n\t}\n\tproject, err := pullOptions{\n\t\tpolicy: types.PullPolicyMissing,\n\t}.apply(project, nil)\n\tassert.NilError(t, err)\n\n\tassert.Equal(t, project.Services[\"must-build\"].PullPolicy, \"\") // still default\n\tassert.Equal(t, project.Services[\"has-build\"].PullPolicy, types.PullPolicyMissing)\n\tassert.Equal(t, project.Services[\"must-pull\"].PullPolicy, types.PullPolicyMissing)\n}\n"
  },
  {
    "path": "cmd/compose/push.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype pushOptions struct {\n\t*ProjectOptions\n\tcomposeOptions\n\tIncludeDeps    bool\n\tIgnorefailures bool\n\tQuiet          bool\n}\n\nfunc pushCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := pushOptions{\n\t\tProjectOptions: p,\n\t}\n\tpushCmd := &cobra.Command{\n\t\tUse:   \"push [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Push service images\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runPush(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tpushCmd.Flags().BoolVar(&opts.Ignorefailures, \"ignore-push-failures\", false, \"Push what it can and ignores images with push failures\")\n\tpushCmd.Flags().BoolVar(&opts.IncludeDeps, \"include-deps\", false, \"Also push images of services declared as dependencies\")\n\tpushCmd.Flags().BoolVarP(&opts.Quiet, \"quiet\", \"q\", false, \"Push without printing progress information\")\n\n\treturn pushCmd\n}\n\nfunc runPush(ctx context.Context, dockerCli command.Cli, 
backendOptions *BackendOptions, opts pushOptions, services []string) error {\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif !opts.IncludeDeps {\n\t\tproject, err = project.WithSelectedServices(services, types.IgnoreDependencies)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn backend.Push(ctx, project, api.PushOptions{\n\t\tIgnoreFailures: opts.Ignorefailures,\n\t\tQuiet:          opts.Quiet,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/remove.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype removeOptions struct {\n\t*ProjectOptions\n\tforce   bool\n\tstop    bool\n\tvolumes bool\n}\n\nfunc removeCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := removeOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"rm [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Removes stopped service containers\",\n\t\tLong: `Removes stopped service containers\n\nBy default, anonymous volumes attached to containers will not be removed. You\ncan override this with -v. 
To list all volumes, use \"docker volume ls\".\n\nAny data which is not in a volume will be lost.`,\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runRemove(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tf := cmd.Flags()\n\tf.BoolVarP(&opts.force, \"force\", \"f\", false, \"Don't ask to confirm removal\")\n\tf.BoolVarP(&opts.stop, \"stop\", \"s\", false, \"Stop the containers, if required, before removing\")\n\tf.BoolVarP(&opts.volumes, \"volumes\", \"v\", false, \"Remove any anonymous volumes attached to containers\")\n\tf.BoolP(\"all\", \"a\", false, \"Deprecated - no effect\")\n\tf.MarkHidden(\"all\") //nolint:errcheck\n\n\treturn cmd\n}\n\nfunc runRemove(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts removeOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = backend.Remove(ctx, name, api.RemoveOptions{\n\t\tServices: services,\n\t\tForce:    opts.force,\n\t\tVolumes:  opts.volumes,\n\t\tProject:  project,\n\t\tStop:     opts.stop,\n\t})\n\tif errors.Is(err, api.ErrNoResources) {\n\t\t_, _ = fmt.Fprintln(stdinfo(dockerCli), \"No stopped containers\")\n\t\treturn nil\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "cmd/compose/restart.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype restartOptions struct {\n\t*ProjectOptions\n\ttimeChanged bool\n\ttimeout     int\n\tnoDeps      bool\n}\n\nfunc restartCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := restartOptions{\n\t\tProjectOptions: p,\n\t}\n\trestartCmd := &cobra.Command{\n\t\tUse:   \"restart [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Restart service containers\",\n\t\tPreRun: func(cmd *cobra.Command, args []string) {\n\t\t\topts.timeChanged = cmd.Flags().Changed(\"timeout\")\n\t\t},\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runRestart(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := restartCmd.Flags()\n\tflags.IntVarP(&opts.timeout, \"timeout\", \"t\", 0, \"Specify a shutdown timeout in seconds\")\n\tflags.BoolVar(&opts.noDeps, \"no-deps\", false, \"Don't restart dependent services\")\n\n\treturn restartCmd\n}\n\nfunc runRestart(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts restartOptions, services []string) error 
{\n\tproject, name, err := opts.projectOrName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif project != nil && len(services) > 0 {\n\t\tproject, err = project.WithServicesEnabled(services...)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tvar timeout *time.Duration\n\tif opts.timeChanged {\n\t\ttimeoutValue := time.Duration(opts.timeout) * time.Second\n\t\ttimeout = &timeoutValue\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Restart(ctx, name, api.RestartOptions{\n\t\tTimeout:  timeout,\n\t\tServices: services,\n\t\tProject:  project,\n\t\tNoDeps:   opts.noDeps,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/run.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\tcomposecli \"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/dotenv\"\n\t\"github.com/compose-spec/compose-go/v2/format\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/opts\"\n\t\"github.com/mattn/go-shellwords\"\n\txprogress \"github.com/moby/buildkit/util/progress/progressui\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/cmd/display\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\ntype runOptions struct {\n\t*composeOptions\n\tService       string\n\tCommand       []string\n\tenvironment   []string\n\tenvFiles      []string\n\tDetach        bool\n\tRemove        bool\n\tnoTty         bool\n\tinteractive   bool\n\tuser          string\n\tworkdir       string\n\tentrypoint    string\n\tentrypointCmd []string\n\tcapAdd        opts.ListOpts\n\tcapDrop       opts.ListOpts\n\tlabels        []string\n\tvolumes       []string\n\tpublish       []string\n\tuseAliases    bool\n\tservicePorts  bool\n\tname          string\n\tnoDeps        
bool\n\tignoreOrphans bool\n\tremoveOrphans bool\n\tquiet         bool\n\tquietPull     bool\n}\n\nfunc (options runOptions) apply(project *types.Project) (*types.Project, error) {\n\tif options.noDeps {\n\t\tvar err error\n\t\tproject, err = project.WithSelectedServices([]string{options.Service}, types.IgnoreDependencies)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\ttarget, err := project.GetService(options.Service)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttarget.Tty = !options.noTty\n\ttarget.StdinOpen = options.interactive\n\n\t// --service-ports and --publish are incompatible\n\tif !options.servicePorts {\n\t\tif len(target.Ports) > 0 {\n\t\t\tlogrus.Debug(\"Running service without ports exposed as --service-ports=false\")\n\t\t}\n\t\ttarget.Ports = []types.ServicePortConfig{}\n\t\tfor _, p := range options.publish {\n\t\t\tconfig, err := types.ParsePortConfig(p)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\ttarget.Ports = append(target.Ports, config...)\n\t\t}\n\t}\n\n\tfor _, v := range options.volumes {\n\t\tvolume, err := format.ParseVolume(v)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\ttarget.Volumes = append(target.Volumes, volume)\n\t}\n\n\tfor name := range project.Services {\n\t\tif name == options.Service {\n\t\t\tproject.Services[name] = target\n\t\t\tbreak\n\t\t}\n\t}\n\treturn project, nil\n}\n\nfunc (options runOptions) getEnvironment(resolve func(string) (string, bool)) (types.Mapping, error) {\n\tenvironment := types.NewMappingWithEquals(options.environment).Resolve(resolve).ToMapping()\n\tfor _, file := range options.envFiles {\n\t\tf, err := os.Open(file)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tvars, err := dotenv.ParseWithLookup(f, func(k string) (string, bool) {\n\t\t\tvalue, ok := environment[k]\n\t\t\treturn value, ok\n\t\t})\n\t\t_ = f.Close()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor k, v := range vars {\n\t\t\tif _, ok := environment[k]; !ok {\n\t\t\t\tenvironment[k] 
= v\n\t\t\t}\n\t\t}\n\t}\n\treturn environment, nil\n}\n\nfunc runCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\toptions := runOptions{\n\t\tcomposeOptions: &composeOptions{\n\t\t\tProjectOptions: p,\n\t\t},\n\t\tcapAdd:  opts.NewListOpts(nil),\n\t\tcapDrop: opts.NewListOpts(nil),\n\t}\n\tcreateOpts := createOptions{}\n\tbuildOpts := buildOptions{\n\t\tProjectOptions: p,\n\t}\n\t// We remove the attribute from the option struct and use a dedicated var, to limit confusion and prevent direct use of options.tty.\n\t// The tty flag is here for convenience and lets users run \"docker compose run -it\" the same way they use the \"docker run\" command.\n\tvar ttyFlag bool\n\n\tcmd := &cobra.Command{\n\t\tUse:   \"run [OPTIONS] SERVICE [COMMAND] [ARGS...]\",\n\t\tShort: \"Run a one-off command on a service\",\n\t\tArgs:  cobra.MinimumNArgs(1),\n\t\tPreRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\toptions.Service = args[0]\n\t\t\tif len(args) > 1 {\n\t\t\t\toptions.Command = args[1:]\n\t\t\t}\n\t\t\tif len(options.publish) > 0 && options.servicePorts {\n\t\t\t\treturn fmt.Errorf(\"--service-ports and --publish are incompatible\")\n\t\t\t}\n\t\t\tif cmd.Flags().Changed(\"entrypoint\") {\n\t\t\t\tcommand, err := shellwords.Parse(options.entrypoint)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\toptions.entrypointCmd = command\n\t\t\t}\n\t\t\tif cmd.Flags().Changed(\"tty\") {\n\t\t\t\tif cmd.Flags().Changed(\"no-TTY\") {\n\t\t\t\t\treturn fmt.Errorf(\"--tty and --no-TTY can't be used together\")\n\t\t\t\t} else {\n\t\t\t\t\toptions.noTty = !ttyFlag\n\t\t\t\t}\n\t\t\t} else if !cmd.Flags().Changed(\"no-TTY\") && !cmd.Flags().Changed(\"interactive\") && !dockerCli.In().IsTerminal() {\n\t\t\t\t// while `docker run` requires explicit `-it` flags, Compose enables interactive mode and TTY by default\n\t\t\t\t// but when compose is used from a script that has stdin 
piped from another command, we just can't.\n\t\t\t\t// Here, we detect we run \"by default\" (the user didn't pass explicit flags) and disable TTY allocation if\n\t\t\t\t// we don't have an actual terminal to attach to for interactive mode\n\t\t\t\toptions.noTty = true\n\t\t\t}\n\n\t\t\tif options.quiet {\n\t\t\t\tdisplay.Mode = display.ModeQuiet\n\t\t\t\tbackendOptions.Add(compose.WithEventProcessor(display.Quiet()))\n\t\t\t}\n\t\t\tcreateOpts.pullChanged = cmd.Flags().Changed(\"pull\")\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tproject, _, err := p.ToProject(ctx, dockerCli, backend, []string{options.Service}, composecli.WithoutEnvironmentResolution)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tproject, err = project.WithServicesEnvironmentResolved(true)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tif createOpts.quietPull {\n\t\t\t\tbuildOpts.Progress = string(xprogress.QuietMode)\n\t\t\t}\n\n\t\t\toptions.ignoreOrphans = utils.StringToBool(project.Environment[ComposeIgnoreOrphans])\n\t\t\treturn runRun(ctx, backend, project, options, createOpts, buildOpts, dockerCli)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.BoolVarP(&options.Detach, \"detach\", \"d\", false, \"Run container in background and print container ID\")\n\tflags.StringArrayVarP(&options.environment, \"env\", \"e\", []string{}, \"Set environment variables\")\n\tflags.StringArrayVar(&options.envFiles, \"env-from-file\", []string{}, \"Set environment variables from file\")\n\tflags.StringArrayVarP(&options.labels, \"label\", \"l\", []string{}, \"Add or override a label\")\n\tflags.BoolVar(&options.Remove, \"rm\", false, \"Automatically remove the container when it exits\")\n\tflags.BoolVarP(&options.noTty, \"no-TTY\", 
\"T\", !dockerCli.Out().IsTerminal(), \"Disable pseudo-TTY allocation (default: auto-detected)\")\n\tflags.StringVar(&options.name, \"name\", \"\", \"Assign a name to the container\")\n\tflags.StringVarP(&options.user, \"user\", \"u\", \"\", \"Run as specified username or uid\")\n\tflags.StringVarP(&options.workdir, \"workdir\", \"w\", \"\", \"Working directory inside the container\")\n\tflags.StringVar(&options.entrypoint, \"entrypoint\", \"\", \"Override the entrypoint of the image\")\n\tflags.Var(&options.capAdd, \"cap-add\", \"Add Linux capabilities\")\n\tflags.Var(&options.capDrop, \"cap-drop\", \"Drop Linux capabilities\")\n\tflags.BoolVar(&options.noDeps, \"no-deps\", false, \"Don't start linked services\")\n\tflags.StringArrayVarP(&options.volumes, \"volume\", \"v\", []string{}, \"Bind mount a volume\")\n\tflags.StringArrayVarP(&options.publish, \"publish\", \"p\", []string{}, \"Publish a container's port(s) to the host\")\n\tflags.BoolVar(&options.useAliases, \"use-aliases\", false, \"Use the service's network aliases in the network(s) the container connects to\")\n\tflags.BoolVarP(&options.servicePorts, \"service-ports\", \"P\", false, \"Run command with all service's ports enabled and mapped to the host\")\n\tflags.StringVar(&createOpts.Pull, \"pull\", \"policy\", `Pull image before running (\"always\"|\"missing\"|\"never\")`)\n\tflags.BoolVarP(&options.quiet, \"quiet\", \"q\", false, \"Don't print anything to STDOUT\")\n\tflags.BoolVar(&buildOpts.quiet, \"quiet-build\", false, \"Suppress progress output from the build process\")\n\tflags.BoolVar(&options.quietPull, \"quiet-pull\", false, \"Pull without printing progress information\")\n\tflags.BoolVar(&createOpts.Build, \"build\", false, \"Build image before starting container\")\n\tflags.BoolVar(&options.removeOrphans, \"remove-orphans\", false, \"Remove containers for services not defined in the Compose file\")\n\n\tcmd.Flags().BoolVarP(&options.interactive, \"interactive\", \"i\", true, \"Keep 
STDIN open even if not attached\")\n\tcmd.Flags().BoolVarP(&ttyFlag, \"tty\", \"t\", true, \"Allocate a pseudo-TTY\")\n\tcmd.Flags().MarkHidden(\"tty\") //nolint:errcheck\n\n\tflags.SetNormalizeFunc(normalizeRunFlags)\n\tflags.SetInterspersed(false)\n\treturn cmd\n}\n\nfunc normalizeRunFlags(f *pflag.FlagSet, name string) pflag.NormalizedName {\n\tswitch name {\n\tcase \"volumes\":\n\t\tname = \"volume\"\n\tcase \"labels\":\n\t\tname = \"label\"\n\t}\n\treturn pflag.NormalizedName(name)\n}\n\nfunc runRun(ctx context.Context, backend api.Compose, project *types.Project, options runOptions, createOpts createOptions, buildOpts buildOptions, dockerCli command.Cli) error {\n\tproject, err := options.apply(project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = createOpts.Apply(project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := checksForRemoteStack(ctx, dockerCli, project, buildOpts, createOpts.AssumeYes, []string{}); err != nil {\n\t\treturn err\n\t}\n\n\tlabels := types.Labels{}\n\tfor _, s := range options.labels {\n\t\tkey, val, ok := strings.Cut(s, \"=\")\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"label must be set as KEY=VALUE\")\n\t\t}\n\t\tlabels[key] = val\n\t}\n\n\tvar buildForRun *api.BuildOptions\n\tif !createOpts.noBuild {\n\t\tbo, err := buildOpts.toAPIBuildOptions(nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tbuildForRun = &bo\n\t}\n\n\tenvironment, err := options.getEnvironment(project.Environment.Resolve)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// start container and attach to container streams\n\trunOpts := api.RunOptions{\n\t\tCreateOptions: api.CreateOptions{\n\t\t\tBuild:         buildForRun,\n\t\t\tRemoveOrphans: options.removeOrphans,\n\t\t\tIgnoreOrphans: options.ignoreOrphans,\n\t\t\tQuietPull:     options.quietPull,\n\t\t},\n\t\tName:              options.name,\n\t\tService:           options.Service,\n\t\tCommand:           options.Command,\n\t\tDetach:            options.Detach,\n\t\tAutoRemove:        
options.Remove,\n\t\tTty:               !options.noTty,\n\t\tInteractive:       options.interactive,\n\t\tWorkingDir:        options.workdir,\n\t\tUser:              options.user,\n\t\tCapAdd:            options.capAdd.GetSlice(),\n\t\tCapDrop:           options.capDrop.GetSlice(),\n\t\tEnvironment:       environment.Values(),\n\t\tEntrypoint:        options.entrypointCmd,\n\t\tLabels:            labels,\n\t\tUseNetworkAliases: options.useAliases,\n\t\tNoDeps:            options.noDeps,\n\t\tIndex:             0,\n\t}\n\n\tfor name, service := range project.Services {\n\t\tif name == options.Service {\n\t\t\tservice.StdinOpen = options.interactive\n\t\t\tproject.Services[name] = service\n\t\t}\n\t}\n\n\texitCode, err := backend.RunOneOffContainer(ctx, project, runOpts)\n\tif exitCode != 0 {\n\t\terrMsg := \"\"\n\t\tif err != nil {\n\t\t\terrMsg = err.Error()\n\t\t}\n\t\treturn cli.StatusError{StatusCode: exitCode, Status: errMsg}\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "cmd/compose/scale.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"maps\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype scaleOptions struct {\n\t*ProjectOptions\n\tnoDeps bool\n}\n\nfunc scaleCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := scaleOptions{\n\t\tProjectOptions: p,\n\t}\n\tscaleCmd := &cobra.Command{\n\t\tUse:   \"scale [SERVICE=REPLICAS...]\",\n\t\tShort: \"Scale services\",\n\t\tArgs:  cobra.MinimumNArgs(1),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tserviceTuples, err := parseServicesReplicasArgs(args)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn runScale(ctx, dockerCli, backendOptions, opts, serviceTuples)\n\t\t}),\n\t\tValidArgsFunction: completeScaleArgs(dockerCli, p),\n\t}\n\tflags := scaleCmd.Flags()\n\tflags.BoolVar(&opts.noDeps, \"no-deps\", false, \"Don't start linked services\")\n\n\treturn scaleCmd\n}\n\nfunc runScale(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts scaleOptions, serviceReplicaTuples map[string]int) error {\n\tbackend, err := 
compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tservices := slices.Sorted(maps.Keys(serviceReplicaTuples))\n\tproject, _, err := opts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif opts.noDeps {\n\t\tif project, err = project.WithSelectedServices(services, types.IgnoreDependencies); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tfor key, value := range serviceReplicaTuples {\n\t\tservice, err := project.GetService(key)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tservice.SetScale(value)\n\t\tproject.Services[key] = service\n\t}\n\n\treturn backend.Scale(ctx, project, api.ScaleOptions{Services: services})\n}\n\nfunc parseServicesReplicasArgs(args []string) (map[string]int, error) {\n\tserviceReplicaTuples := map[string]int{}\n\tfor _, arg := range args {\n\t\tkey, val, ok := strings.Cut(arg, \"=\")\n\t\tif !ok || key == \"\" || val == \"\" {\n\t\t\treturn nil, fmt.Errorf(\"invalid scale specifier: %s\", arg)\n\t\t}\n\t\tintValue, err := strconv.Atoi(val)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid scale specifier: can't parse replica value as int: %v\", arg)\n\t\t}\n\t\tserviceReplicaTuples[key] = intValue\n\t}\n\treturn serviceReplicaTuples, nil\n}\n"
  },
  {
    "path": "cmd/compose/start.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype startOptions struct {\n\t*ProjectOptions\n\twait        bool\n\twaitTimeout int\n}\n\nfunc startCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := startOptions{\n\t\tProjectOptions: p,\n\t}\n\tstartCmd := &cobra.Command{\n\t\tUse:   \"start [SERVICE...]\",\n\t\tShort: \"Start services\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runStart(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := startCmd.Flags()\n\tflags.BoolVar(&opts.wait, \"wait\", false, \"Wait for services to be running|healthy. 
Implies detached mode.\")\n\tflags.IntVar(&opts.waitTimeout, \"wait-timeout\", 0, \"Maximum duration in seconds to wait for the project to be running|healthy\")\n\n\treturn startCmd\n}\n\nfunc runStart(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts startOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar timeout time.Duration\n\tif opts.waitTimeout > 0 {\n\t\ttimeout = time.Duration(opts.waitTimeout) * time.Second\n\t}\n\treturn backend.Start(ctx, name, api.StartOptions{\n\t\tAttachTo:    services,\n\t\tProject:     project,\n\t\tServices:    services,\n\t\tWait:        opts.wait,\n\t\tWaitTimeout: timeout,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/stats.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/command/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\ntype statsOptions struct {\n\tProjectOptions *ProjectOptions\n\tall            bool\n\tformat         string\n\tnoStream       bool\n\tnoTrunc        bool\n}\n\nfunc statsCommand(p *ProjectOptions, dockerCli command.Cli) *cobra.Command {\n\topts := statsOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"stats [OPTIONS] [SERVICE]\",\n\t\tShort: \"Display a live stream of container(s) resource usage statistics\",\n\t\tArgs:  cobra.MaximumNArgs(1),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runStats(ctx, dockerCli, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.BoolVarP(&opts.all, \"all\", \"a\", false, \"Show all containers (default shows just running)\")\n\tflags.StringVar(&opts.format, \"format\", \"\", `Format output using a custom template:\n'table':            Print output in table format with column headers (default)\n'table TEMPLATE':   Print output in table format using the given Go template\n'json':             Print in JSON 
format\n'TEMPLATE':         Print output using the given Go template.\nRefer to https://docs.docker.com/engine/cli/formatting/ for more information about formatting output with templates`)\n\tflags.BoolVar(&opts.noStream, \"no-stream\", false, \"Disable streaming stats and only pull the first result\")\n\tflags.BoolVar(&opts.noTrunc, \"no-trunc\", false, \"Do not truncate output\")\n\treturn cmd\n}\n\nfunc runStats(ctx context.Context, dockerCli command.Cli, opts statsOptions, service []string) error {\n\tname, err := opts.ProjectOptions.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\tf := client.Filters{}\n\tf.Add(\"label\", fmt.Sprintf(\"%s=%s\", api.ProjectLabel, name))\n\n\tif len(service) > 0 {\n\t\tf.Add(\"label\", fmt.Sprintf(\"%s=%s\", api.ServiceLabel, service[0]))\n\t}\n\treturn container.RunStats(ctx, dockerCli, &container.StatsOptions{\n\t\tAll:      opts.all,\n\t\tNoStream: opts.noStream,\n\t\tNoTrunc:  opts.noTrunc,\n\t\tFormat:   opts.format,\n\t\tFilters:  f,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/stop.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype stopOptions struct {\n\t*ProjectOptions\n\ttimeChanged bool\n\ttimeout     int\n}\n\nfunc stopCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := stopOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"stop [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Stop services\",\n\t\tPreRun: func(cmd *cobra.Command, args []string) {\n\t\t\topts.timeChanged = cmd.Flags().Changed(\"timeout\")\n\t\t},\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runStop(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := cmd.Flags()\n\tflags.IntVarP(&opts.timeout, \"timeout\", \"t\", 0, \"Specify a shutdown timeout in seconds\")\n\n\treturn cmd\n}\n\nfunc runStop(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts stopOptions, services []string) error {\n\tproject, name, err := opts.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar timeout *time.Duration\n\tif opts.timeChanged 
{\n\t\ttimeoutValue := time.Duration(opts.timeout) * time.Second\n\t\ttimeout = &timeoutValue\n\t}\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn backend.Stop(ctx, name, api.StopOptions{\n\t\tTimeout:  timeout,\n\t\tServices: services,\n\t\tProject:  project,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/top.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"sort\"\n\t\"strings\"\n\t\"text/tabwriter\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype topOptions struct {\n\t*ProjectOptions\n}\n\nfunc topCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := topOptions{\n\t\tProjectOptions: p,\n\t}\n\ttopCmd := &cobra.Command{\n\t\tUse:   \"top [SERVICES...]\",\n\t\tShort: \"Display the running processes\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runTop(ctx, dockerCli, backendOptions, opts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\treturn topCmd\n}\n\ntype (\n\ttopHeader  map[string]int // maps a proc title to its output index\n\ttopEntries map[string]string\n)\n\nfunc runTop(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts topOptions, services []string) error {\n\tprojectName, err := opts.toProjectName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tcontainers, err := backend.Top(ctx, projectName, 
services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tsort.Slice(containers, func(i, j int) bool {\n\t\treturn containers[i].Name < containers[j].Name\n\t})\n\n\theader, entries := collectTop(containers)\n\treturn topPrint(dockerCli.Out(), header, entries)\n}\n\nfunc collectTop(containers []api.ContainerProcSummary) (topHeader, []topEntries) {\n\t// map column name to its header (should keep working if backend.Top returns\n\t// varying columns for different containers)\n\theader := topHeader{\"SERVICE\": 0, \"#\": 1}\n\n\t// assume one process per container and grow if needed\n\tentries := make([]topEntries, 0, len(containers))\n\n\tfor _, container := range containers {\n\t\tfor _, proc := range container.Processes {\n\t\t\tentry := topEntries{\n\t\t\t\t\"SERVICE\": container.Service,\n\t\t\t\t\"#\":       container.Replica,\n\t\t\t}\n\t\t\tfor i, title := range container.Titles {\n\t\t\t\tif _, exists := header[title]; !exists {\n\t\t\t\t\theader[title] = len(header)\n\t\t\t\t}\n\t\t\t\tentry[title] = proc[i]\n\t\t\t}\n\t\t\tentries = append(entries, entry)\n\t\t}\n\t}\n\n\t// ensure CMD is the right-most column\n\tif pos, ok := header[\"CMD\"]; ok {\n\t\tmaxPos := pos\n\t\tfor h, i := range header {\n\t\t\tif i > maxPos {\n\t\t\t\tmaxPos = i\n\t\t\t}\n\t\t\tif i > pos {\n\t\t\t\theader[h] = i - 1\n\t\t\t}\n\t\t}\n\t\theader[\"CMD\"] = maxPos\n\t}\n\n\treturn header, entries\n}\n\nfunc topPrint(out io.Writer, headers topHeader, rows []topEntries) error {\n\tif len(rows) == 0 {\n\t\treturn nil\n\t}\n\n\tw := tabwriter.NewWriter(out, 4, 1, 2, ' ', 0)\n\n\t// write headers in the order we've encountered them\n\th := make([]string, len(headers))\n\tfor title, index := range headers {\n\t\th[index] = title\n\t}\n\t_, _ = fmt.Fprintln(w, strings.Join(h, \"\\t\"))\n\n\tfor _, row := range rows {\n\t\t// write proc data in header order\n\t\tr := make([]string, len(headers))\n\t\tfor title, index := range headers {\n\t\t\tif v, ok := row[title]; ok {\n\t\t\t\tr[index] = 
v\n\t\t\t} else {\n\t\t\t\tr[index] = \"-\"\n\t\t\t}\n\t\t}\n\t\t_, _ = fmt.Fprintln(w, strings.Join(r, \"\\t\"))\n\t}\n\treturn w.Flush()\n}\n"
  },
  {
    "path": "cmd/compose/top_test.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nvar topTestCases = []struct {\n\tname   string\n\ttitles []string\n\tprocs  [][]string\n\n\theader  topHeader\n\tentries []topEntries\n\toutput  string\n}{\n\t{\n\t\tname:    \"noprocs\",\n\t\ttitles:  []string{\"UID\", \"PID\", \"PPID\", \"C\", \"STIME\", \"TTY\", \"TIME\", \"CMD\"},\n\t\tprocs:   [][]string{},\n\t\theader:  topHeader{\"SERVICE\": 0, \"#\": 1},\n\t\tentries: []topEntries{},\n\t\toutput:  \"\",\n\t},\n\t{\n\t\tname:   \"simple\",\n\t\ttitles: []string{\"UID\", \"PID\", \"PPID\", \"C\", \"STIME\", \"TTY\", \"TIME\", \"CMD\"},\n\t\tprocs:  [][]string{{\"root\", \"1\", \"1\", \"0\", \"12:00\", \"?\", \"00:00:01\", \"/entrypoint\"}},\n\t\theader: topHeader{\n\t\t\t\"SERVICE\": 0,\n\t\t\t\"#\":       1,\n\t\t\t\"UID\":     2,\n\t\t\t\"PID\":     3,\n\t\t\t\"PPID\":    4,\n\t\t\t\"C\":       5,\n\t\t\t\"STIME\":   6,\n\t\t\t\"TTY\":     7,\n\t\t\t\"TIME\":    8,\n\t\t\t\"CMD\":     9,\n\t\t},\n\t\tentries: []topEntries{\n\t\t\t{\n\t\t\t\t\"SERVICE\": \"simple\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       
\"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:01\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t},\n\t\t},\n\t\toutput: trim(`\n\t\t\tSERVICE  #   UID   PID  PPID  C   STIME  TTY  TIME      CMD\n\t\t\tsimple   1   root  1    1     0   12:00  ?    00:00:01  /entrypoint\n\t\t`),\n\t},\n\t{\n\t\tname:   \"noppid\",\n\t\ttitles: []string{\"UID\", \"PID\", \"C\", \"STIME\", \"TTY\", \"TIME\", \"CMD\"},\n\t\tprocs:  [][]string{{\"root\", \"1\", \"0\", \"12:00\", \"?\", \"00:00:02\", \"/entrypoint\"}},\n\t\theader: topHeader{\n\t\t\t\"SERVICE\": 0,\n\t\t\t\"#\":       1,\n\t\t\t\"UID\":     2,\n\t\t\t\"PID\":     3,\n\t\t\t\"C\":       4,\n\t\t\t\"STIME\":   5,\n\t\t\t\"TTY\":     6,\n\t\t\t\"TIME\":    7,\n\t\t\t\"CMD\":     8,\n\t\t},\n\t\tentries: []topEntries{\n\t\t\t{\n\t\t\t\t\"SERVICE\": \"noppid\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:02\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t},\n\t\t},\n\t\toutput: trim(`\n\t\t\tSERVICE  #   UID   PID  C   STIME  TTY  TIME      CMD\n\t\t\tnoppid   1   root  1    0   12:00  ?    
00:00:02  /entrypoint\n\t\t`),\n\t},\n\t{\n\t\tname:   \"extra-hdr\",\n\t\ttitles: []string{\"UID\", \"GID\", \"PID\", \"PPID\", \"C\", \"STIME\", \"TTY\", \"TIME\", \"CMD\"},\n\t\tprocs:  [][]string{{\"root\", \"1\", \"1\", \"1\", \"0\", \"12:00\", \"?\", \"00:00:03\", \"/entrypoint\"}},\n\t\theader: topHeader{\n\t\t\t\"SERVICE\": 0,\n\t\t\t\"#\":       1,\n\t\t\t\"UID\":     2,\n\t\t\t\"GID\":     3,\n\t\t\t\"PID\":     4,\n\t\t\t\"PPID\":    5,\n\t\t\t\"C\":       6,\n\t\t\t\"STIME\":   7,\n\t\t\t\"TTY\":     8,\n\t\t\t\"TIME\":    9,\n\t\t\t\"CMD\":     10,\n\t\t},\n\t\tentries: []topEntries{\n\t\t\t{\n\t\t\t\t\"SERVICE\": \"extra-hdr\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"GID\":     \"1\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:03\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t},\n\t\t},\n\t\toutput: trim(`\n\t\t\tSERVICE    #   UID   GID  PID  PPID  C   STIME  TTY  TIME      CMD\n\t\t\textra-hdr  1   root  1    1    1     0   12:00  ?    
00:00:03  /entrypoint\n\t\t`),\n\t},\n\t{\n\t\tname:   \"multiple\",\n\t\ttitles: []string{\"UID\", \"PID\", \"PPID\", \"C\", \"STIME\", \"TTY\", \"TIME\", \"CMD\"},\n\t\tprocs: [][]string{\n\t\t\t{\"root\", \"1\", \"1\", \"0\", \"12:00\", \"?\", \"00:00:04\", \"/entrypoint\"},\n\t\t\t{\"root\", \"123\", \"1\", \"0\", \"12:00\", \"?\", \"00:00:42\", \"sleep infinity\"},\n\t\t},\n\t\theader: topHeader{\n\t\t\t\"SERVICE\": 0,\n\t\t\t\"#\":       1,\n\t\t\t\"UID\":     2,\n\t\t\t\"PID\":     3,\n\t\t\t\"PPID\":    4,\n\t\t\t\"C\":       5,\n\t\t\t\"STIME\":   6,\n\t\t\t\"TTY\":     7,\n\t\t\t\"TIME\":    8,\n\t\t\t\"CMD\":     9,\n\t\t},\n\t\tentries: []topEntries{\n\t\t\t{\n\t\t\t\t\"SERVICE\": \"multiple\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:04\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"SERVICE\": \"multiple\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"123\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:42\",\n\t\t\t\t\"CMD\":     \"sleep infinity\",\n\t\t\t},\n\t\t},\n\t\toutput: trim(`\n\t\t\tSERVICE   #   UID   PID  PPID  C   STIME  TTY  TIME      CMD\n\t\t\tmultiple  1   root  1    1     0   12:00  ?    00:00:04  /entrypoint\n\t\t\tmultiple  1   root  123  1     0   12:00  ?    
00:00:42  sleep infinity\n\t\t`),\n\t},\n}\n\n// TestRunTopCore only tests the core functionality of runTop: formatting\n// and printing of the output of (api.Compose).Top().\nfunc TestRunTopCore(t *testing.T) {\n\tt.Parallel()\n\n\tall := []api.ContainerProcSummary{}\n\n\tfor _, tc := range topTestCases {\n\t\tsummary := api.ContainerProcSummary{\n\t\t\tName:      \"not used\",\n\t\t\tTitles:    tc.titles,\n\t\t\tProcesses: tc.procs,\n\t\t\tService:   tc.name,\n\t\t\tReplica:   \"1\",\n\t\t}\n\t\tall = append(all, summary)\n\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\theader, entries := collectTop([]api.ContainerProcSummary{summary})\n\t\t\tassert.Equal(t, tc.header, header)\n\t\t\tassert.Equal(t, tc.entries, entries)\n\n\t\t\tvar buf bytes.Buffer\n\t\t\terr := topPrint(&buf, header, entries)\n\n\t\t\trequire.NoError(t, err)\n\t\t\tassert.Equal(t, tc.output, buf.String())\n\t\t})\n\t}\n\n\tt.Run(\"all\", func(t *testing.T) {\n\t\theader, entries := collectTop(all)\n\t\tassert.Equal(t, topHeader{\n\t\t\t\"SERVICE\": 0,\n\t\t\t\"#\":       1,\n\t\t\t\"UID\":     2,\n\t\t\t\"PID\":     3,\n\t\t\t\"PPID\":    4,\n\t\t\t\"C\":       5,\n\t\t\t\"STIME\":   6,\n\t\t\t\"TTY\":     7,\n\t\t\t\"TIME\":    8,\n\t\t\t\"GID\":     9,\n\t\t\t\"CMD\":     10,\n\t\t}, header)\n\t\tassert.Equal(t, []topEntries{\n\t\t\t{\n\t\t\t\t\"SERVICE\": \"simple\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:01\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t}, {\n\t\t\t\t\"SERVICE\": \"noppid\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:02\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t}, {\n\t\t\t\t\"SERVICE\": 
\"extra-hdr\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"GID\":     \"1\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:03\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t}, {\n\t\t\t\t\"SERVICE\": \"multiple\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"1\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:04\",\n\t\t\t\t\"CMD\":     \"/entrypoint\",\n\t\t\t}, {\n\t\t\t\t\"SERVICE\": \"multiple\",\n\t\t\t\t\"#\":       \"1\",\n\t\t\t\t\"UID\":     \"root\",\n\t\t\t\t\"PID\":     \"123\",\n\t\t\t\t\"PPID\":    \"1\",\n\t\t\t\t\"C\":       \"0\",\n\t\t\t\t\"STIME\":   \"12:00\",\n\t\t\t\t\"TTY\":     \"?\",\n\t\t\t\t\"TIME\":    \"00:00:42\",\n\t\t\t\t\"CMD\":     \"sleep infinity\",\n\t\t\t},\n\t\t}, entries)\n\n\t\tvar buf bytes.Buffer\n\t\terr := topPrint(&buf, header, entries)\n\t\trequire.NoError(t, err)\n\t\tassert.Equal(t, trim(`\n\t\t\tSERVICE    #   UID   PID  PPID  C   STIME  TTY  TIME      GID  CMD\n\t\t\tsimple     1   root  1    1     0   12:00  ?    00:00:01  -    /entrypoint\n\t\t\tnoppid     1   root  1    -     0   12:00  ?    00:00:02  -    /entrypoint\n\t\t\textra-hdr  1   root  1    1     0   12:00  ?    00:00:03  1    /entrypoint\n\t\t\tmultiple   1   root  1    1     0   12:00  ?    00:00:04  -    /entrypoint\n\t\t\tmultiple   1   root  123  1     0   12:00  ?    00:00:42  -    sleep infinity\n\t\t`), buf.String())\n\t})\n}\n\nfunc trim(s string) string {\n\tvar out bytes.Buffer\n\tfor line := range strings.SplitSeq(strings.TrimSpace(s), \"\\n\") {\n\t\tout.WriteString(strings.TrimSpace(line))\n\t\tout.WriteRune('\\n')\n\t}\n\treturn out.String()\n}\n"
  },
  {
    "path": "cmd/compose/up.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\txprogress \"github.com/moby/buildkit/util/progress/progressui\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n\n\t\"github.com/docker/compose/v5/cmd/display\"\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\n// composeOptions holds options common to `up` and `run` to run a compose project\ntype composeOptions struct {\n\t*ProjectOptions\n}\n\ntype upOptions struct {\n\t*composeOptions\n\tDetach                bool\n\tnoStart               bool\n\tnoDeps                bool\n\tcascadeStop           bool\n\tcascadeFail           bool\n\texitCodeFrom          string\n\tnoColor               bool\n\tnoPrefix              bool\n\tattachDependencies    bool\n\tattach                []string\n\tnoAttach              []string\n\ttimestamp             bool\n\twait                  bool\n\twaitTimeout           int\n\twatch                 bool\n\tnavigationMenu        bool\n\tnavigationMenuChanged bool\n}\n\nfunc (opts upOptions) apply(project *types.Project, 
services []string) (*types.Project, error) {\n\tif opts.noDeps {\n\t\tvar err error\n\t\tproject, err = project.WithSelectedServices(services, types.IgnoreDependencies)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif opts.exitCodeFrom != \"\" {\n\t\t_, err := project.GetService(opts.exitCodeFrom)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\treturn project, nil\n}\n\nfunc (opts *upOptions) validateNavigationMenu(dockerCli command.Cli) {\n\tif !dockerCli.Out().IsTerminal() {\n\t\topts.navigationMenu = false\n\t\treturn\n\t}\n\t// If --menu flag was not set\n\tif !opts.navigationMenuChanged {\n\t\tif envVar, ok := os.LookupEnv(ComposeMenu); ok {\n\t\t\topts.navigationMenu = utils.StringToBool(envVar)\n\t\t\treturn\n\t\t}\n\t\t// ...and COMPOSE_MENU env var is not defined we want the default value to be true\n\t\topts.navigationMenu = true\n\t}\n}\n\nfunc (opts upOptions) OnExit() api.Cascade {\n\tswitch {\n\tcase opts.cascadeStop:\n\t\treturn api.CascadeStop\n\tcase opts.cascadeFail:\n\t\treturn api.CascadeFail\n\tdefault:\n\t\treturn api.CascadeIgnore\n\t}\n}\n\nfunc upCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\tup := upOptions{}\n\tcreate := createOptions{}\n\tbuild := buildOptions{ProjectOptions: p}\n\tupCmd := &cobra.Command{\n\t\tUse:   \"up [OPTIONS] [SERVICE...]\",\n\t\tShort: \"Create and start containers\",\n\t\tPreRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\tcreate.pullChanged = cmd.Flags().Changed(\"pull\")\n\t\t\tcreate.timeChanged = cmd.Flags().Changed(\"timeout\")\n\t\t\tup.navigationMenuChanged = cmd.Flags().Changed(\"menu\")\n\t\t\tif !cmd.Flags().Changed(\"remove-orphans\") {\n\t\t\t\tcreate.removeOrphans = utils.StringToBool(os.Getenv(ComposeRemoveOrphans))\n\t\t\t}\n\t\t\treturn validateFlags(&up, &create)\n\t\t}),\n\t\tRunE: p.WithServices(dockerCli, func(ctx context.Context, project *types.Project, services 
[]string) error {\n\t\t\tcreate.ignoreOrphans = utils.StringToBool(project.Environment[ComposeIgnoreOrphans])\n\t\t\tif create.ignoreOrphans && create.removeOrphans {\n\t\t\t\treturn fmt.Errorf(\"cannot combine %s and --remove-orphans\", ComposeIgnoreOrphans)\n\t\t\t}\n\t\t\tif len(up.attach) != 0 && up.attachDependencies {\n\t\t\t\treturn errors.New(\"cannot combine --attach and --attach-dependencies\")\n\t\t\t}\n\n\t\t\tup.validateNavigationMenu(dockerCli)\n\n\t\t\tif !p.All && len(project.Services) == 0 {\n\t\t\t\treturn fmt.Errorf(\"no service selected\")\n\t\t\t}\n\n\t\t\treturn runUp(ctx, dockerCli, backendOptions, create, up, build, project, services)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\tflags := upCmd.Flags()\n\tflags.BoolVarP(&up.Detach, \"detach\", \"d\", false, \"Detached mode: Run containers in the background\")\n\tflags.BoolVar(&create.Build, \"build\", false, \"Build images before starting containers\")\n\tflags.BoolVar(&create.noBuild, \"no-build\", false, \"Don't build an image, even if it's policy\")\n\tflags.StringVar(&create.Pull, \"pull\", \"policy\", `Pull image before running (\"always\"|\"missing\"|\"never\")`)\n\tflags.BoolVar(&create.removeOrphans, \"remove-orphans\", false, \"Remove containers for services not defined in the Compose file\")\n\tflags.StringArrayVar(&create.scale, \"scale\", []string{}, \"Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present.\")\n\tflags.BoolVar(&up.noColor, \"no-color\", false, \"Produce monochrome output\")\n\tflags.BoolVar(&up.noPrefix, \"no-log-prefix\", false, \"Don't print prefix in logs\")\n\tflags.BoolVar(&create.forceRecreate, \"force-recreate\", false, \"Recreate containers even if their configuration and image haven't changed\")\n\tflags.BoolVar(&create.noRecreate, \"no-recreate\", false, \"If containers already exist, don't recreate them. 
Incompatible with --force-recreate.\")\n\tflags.BoolVar(&up.noStart, \"no-start\", false, \"Don't start the services after creating them\")\n\tflags.BoolVar(&up.cascadeStop, \"abort-on-container-exit\", false, \"Stops all containers if any container was stopped. Incompatible with -d\")\n\tflags.BoolVar(&up.cascadeFail, \"abort-on-container-failure\", false, \"Stops all containers if any container exited with failure. Incompatible with -d\")\n\tflags.StringVar(&up.exitCodeFrom, \"exit-code-from\", \"\", \"Return the exit code of the selected service container. Implies --abort-on-container-exit\")\n\tflags.IntVarP(&create.timeout, \"timeout\", \"t\", 0, \"Use this timeout in seconds for container shutdown when attached or when containers are already running\")\n\tflags.BoolVar(&up.timestamp, \"timestamps\", false, \"Show timestamps\")\n\tflags.BoolVar(&up.noDeps, \"no-deps\", false, \"Don't start linked services\")\n\tflags.BoolVar(&create.recreateDeps, \"always-recreate-deps\", false, \"Recreate dependent containers. Incompatible with --no-recreate.\")\n\tflags.BoolVarP(&create.noInherit, \"renew-anon-volumes\", \"V\", false, \"Recreate anonymous volumes instead of retrieving data from the previous containers\")\n\tflags.BoolVar(&create.quietPull, \"quiet-pull\", false, \"Pull without printing progress information\")\n\tflags.BoolVar(&build.quiet, \"quiet-build\", false, \"Suppress the build output\")\n\tflags.StringArrayVar(&up.attach, \"attach\", []string{}, \"Restrict attaching to the specified services. Incompatible with --attach-dependencies.\")\n\tflags.StringArrayVar(&up.noAttach, \"no-attach\", []string{}, \"Do not attach (stream logs) to the specified services\")\n\tflags.BoolVar(&up.attachDependencies, \"attach-dependencies\", false, \"Automatically attach to log output of dependent services\")\n\tflags.BoolVar(&up.wait, \"wait\", false, \"Wait for services to be running|healthy. 
Implies detached mode.\")\n\tflags.IntVar(&up.waitTimeout, \"wait-timeout\", 0, \"Maximum duration in seconds to wait for the project to be running|healthy\")\n\tflags.BoolVarP(&up.watch, \"watch\", \"w\", false, \"Watch source code and rebuild/refresh containers when files are updated.\")\n\tflags.BoolVar(&up.navigationMenu, \"menu\", false, \"Enable interactive shortcuts when running attached. Incompatible with --detach. Can also be enabled/disabled by setting the COMPOSE_MENU environment variable.\")\n\tflags.BoolVarP(&create.AssumeYes, \"yes\", \"y\", false, `Assume \"yes\" as answer to all prompts and run non-interactively`)\n\tflags.SetNormalizeFunc(func(f *pflag.FlagSet, name string) pflag.NormalizedName {\n\t\t// assumeYes was introduced by mistake as `--y`\n\t\tif name == \"y\" {\n\t\t\tlogrus.Warn(\"--y is deprecated, please use --yes instead\")\n\t\t\tname = \"yes\"\n\t\t}\n\t\treturn pflag.NormalizedName(name)\n\t})\n\treturn upCmd\n}\n\n//nolint:gocyclo\nfunc validateFlags(up *upOptions, create *createOptions) error {\n\tif up.waitTimeout < 0 {\n\t\treturn fmt.Errorf(\"--wait-timeout must be a non-negative integer\")\n\t}\n\tif up.exitCodeFrom != \"\" && !up.cascadeFail {\n\t\tup.cascadeStop = true\n\t}\n\tif up.cascadeStop && up.cascadeFail {\n\t\treturn fmt.Errorf(\"--abort-on-container-failure cannot be combined with --abort-on-container-exit\")\n\t}\n\tif up.wait {\n\t\tif up.attachDependencies || up.cascadeStop || len(up.attach) > 0 {\n\t\t\treturn fmt.Errorf(\"--wait cannot be combined with --abort-on-container-exit, --attach or --attach-dependencies\")\n\t\t}\n\t\tup.Detach = true\n\t}\n\tif create.Build && create.noBuild {\n\t\treturn fmt.Errorf(\"--build and --no-build are incompatible\")\n\t}\n\tif up.Detach && (up.attachDependencies || up.cascadeStop || up.cascadeFail || len(up.attach) > 0 || up.watch) {\n\t\tif up.wait {\n\t\t\treturn fmt.Errorf(\"--wait cannot be combined with --abort-on-container-exit, --abort-on-container-failure, --attach, 
--attach-dependencies or --watch\")\n\t\t} else {\n\t\t\treturn fmt.Errorf(\"--detach cannot be combined with --abort-on-container-exit, --abort-on-container-failure, --attach, --attach-dependencies or --watch\")\n\t\t}\n\t}\n\tif create.noInherit && create.noRecreate {\n\t\treturn fmt.Errorf(\"--no-recreate and --renew-anon-volumes are incompatible\")\n\t}\n\tif create.forceRecreate && create.noRecreate {\n\t\treturn fmt.Errorf(\"--force-recreate and --no-recreate are incompatible\")\n\t}\n\tif create.recreateDeps && create.noRecreate {\n\t\treturn fmt.Errorf(\"--always-recreate-deps and --no-recreate are incompatible\")\n\t}\n\tif create.noBuild && up.watch {\n\t\treturn fmt.Errorf(\"--no-build and --watch are incompatible\")\n\t}\n\treturn nil\n}\n\n//nolint:gocyclo\nfunc runUp(\n\tctx context.Context,\n\tdockerCli command.Cli,\n\tbackendOptions *BackendOptions,\n\tcreateOptions createOptions,\n\tupOptions upOptions,\n\tbuildOptions buildOptions,\n\tproject *types.Project,\n\tservices []string,\n) error {\n\tif err := checksForRemoteStack(ctx, dockerCli, project, buildOptions, createOptions.AssumeYes, []string{}); err != nil {\n\t\treturn err\n\t}\n\n\terr := createOptions.Apply(project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, err = upOptions.apply(project, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar build *api.BuildOptions\n\tif !createOptions.noBuild {\n\t\tif createOptions.quietPull {\n\t\t\tbuildOptions.Progress = string(xprogress.QuietMode)\n\t\t}\n\t\t// BuildOptions here is nested inside CreateOptions, so\n\t\t// no service list is passed, it will implicitly pick all\n\t\t// services being created, which includes any explicitly\n\t\t// specified via \"services\" arg here as well as deps\n\t\tbo, err := buildOptions.toAPIBuildOptions(nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tbo.Services = project.ServiceNames()\n\t\tbo.Deps = !upOptions.noDeps\n\t\tbuild = &bo\n\t}\n\n\tcreate := api.CreateOptions{\n\t\tBuild:     
           build,\n\t\tServices:             services,\n\t\tRemoveOrphans:        createOptions.removeOrphans,\n\t\tIgnoreOrphans:        createOptions.ignoreOrphans,\n\t\tRecreate:             createOptions.recreateStrategy(),\n\t\tRecreateDependencies: createOptions.dependenciesRecreateStrategy(),\n\t\tInherit:              !createOptions.noInherit,\n\t\tTimeout:              createOptions.GetTimeout(),\n\t\tQuietPull:            createOptions.quietPull,\n\t}\n\n\tif createOptions.AssumeYes {\n\t\tbackendOptions.Options = append(backendOptions.Options, compose.WithPrompt(compose.AlwaysOkPrompt()))\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif upOptions.noStart {\n\t\treturn backend.Create(ctx, project, create)\n\t}\n\n\tvar consumer api.LogConsumer\n\tvar attach []string\n\tif !upOptions.Detach {\n\t\tconsumer = formatter.NewLogConsumer(ctx, dockerCli.Out(), dockerCli.Err(), !upOptions.noColor, !upOptions.noPrefix, upOptions.timestamp)\n\n\t\tvar attachSet utils.Set[string]\n\t\tif len(upOptions.attach) != 0 {\n\t\t\t// services are passed explicitly with --attach, verify they're valid and then use them as-is\n\t\t\tattachSet = utils.NewSet(upOptions.attach...)\n\t\t\tunexpectedSvcs := attachSet.Diff(utils.NewSet(project.ServiceNames()...))\n\t\t\tif len(unexpectedSvcs) != 0 {\n\t\t\t\treturn fmt.Errorf(\"cannot attach to services not included in up: %s\", strings.Join(unexpectedSvcs.Elements(), \", \"))\n\t\t\t}\n\t\t} else {\n\t\t\t// mark services being launched (and potentially their deps) for attach\n\t\t\t// if they didn't opt-out via Compose YAML\n\t\t\tattachSet = utils.NewSet[string]()\n\t\t\tvar dependencyOpt types.DependencyOption = types.IgnoreDependencies\n\t\t\tif upOptions.attachDependencies {\n\t\t\t\tdependencyOpt = types.IncludeDependencies\n\t\t\t}\n\t\t\tif err := project.ForEachService(services, func(serviceName string, s *types.ServiceConfig) error 
{\n\t\t\t\tif s.Attach == nil || *s.Attach {\n\t\t\t\t\tattachSet.Add(serviceName)\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\t}, dependencyOpt); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\t// filter out any services that have been explicitly marked for ignore with `--no-attach`\n\t\tattachSet.RemoveAll(upOptions.noAttach...)\n\t\tattach = attachSet.Elements()\n\t}\n\n\tvar timeout time.Duration\n\tif upOptions.waitTimeout > 0 {\n\t\ttimeout = time.Duration(upOptions.waitTimeout) * time.Second\n\t}\n\treturn backend.Up(ctx, project, api.UpOptions{\n\t\tCreate: create,\n\t\tStart: api.StartOptions{\n\t\t\tProject:        project,\n\t\t\tAttach:         consumer,\n\t\t\tAttachTo:       attach,\n\t\t\tExitCodeFrom:   upOptions.exitCodeFrom,\n\t\t\tOnExit:         upOptions.OnExit(),\n\t\t\tWait:           upOptions.wait,\n\t\t\tWaitTimeout:    timeout,\n\t\t\tWatch:          upOptions.watch,\n\t\t\tServices:       services,\n\t\t\tNavigationMenu: upOptions.navigationMenu && display.Mode != \"plain\" && dockerCli.In().IsTerminal(),\n\t\t},\n\t})\n}\n\nfunc setServiceScale(project *types.Project, name string, replicas int) error {\n\tservice, err := project.GetService(name)\n\tif err != nil {\n\t\treturn err\n\t}\n\tservice.SetScale(replicas)\n\tproject.Services[name] = service\n\treturn nil\n}\n"
  },
  {
    "path": "cmd/compose/up_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestApplyScaleOpt(t *testing.T) {\n\tp := types.Project{\n\t\tServices: types.Services{\n\t\t\t\"foo\": {\n\t\t\t\tName: \"foo\",\n\t\t\t},\n\t\t\t\"bar\": {\n\t\t\t\tName: \"bar\",\n\t\t\t\tDeploy: &types.DeployConfig{\n\t\t\t\t\tMode: \"test\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\terr := applyScaleOpts(&p, []string{\"foo=2\", \"bar=3\"})\n\tassert.NilError(t, err)\n\tfoo, err := p.GetService(\"foo\")\n\tassert.NilError(t, err)\n\tassert.Equal(t, *foo.Scale, 2)\n\n\tbar, err := p.GetService(\"bar\")\n\tassert.NilError(t, err)\n\tassert.Equal(t, *bar.Scale, 3)\n\tassert.Equal(t, *bar.Deploy.Replicas, 3)\n}\n\nfunc TestUpOptions_OnExit(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\targs upOptions\n\t\twant api.Cascade\n\t}{\n\t\t{\n\t\t\tname: \"no cascade\",\n\t\t\targs: upOptions{},\n\t\t\twant: api.CascadeIgnore,\n\t\t},\n\t\t{\n\t\t\tname: \"cascade stop\",\n\t\t\targs: upOptions{cascadeStop: true},\n\t\t\twant: api.CascadeStop,\n\t\t},\n\t\t{\n\t\t\tname: \"cascade fail\",\n\t\t\targs: upOptions{cascadeFail: true},\n\t\t\twant: api.CascadeFail,\n\t\t},\n\t\t{\n\t\t\tname: \"both set - stop takes precedence\",\n\t\t\targs: 
upOptions{\n\t\t\t\tcascadeStop: true,\n\t\t\t\tcascadeFail: true,\n\t\t\t},\n\t\t\twant: api.CascadeStop,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := tt.args.OnExit()\n\t\t\tassert.Equal(t, got, tt.want)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cmd/compose/version.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/internal\"\n)\n\ntype versionOptions struct {\n\tformat string\n\tshort  bool\n}\n\nfunc versionCommand(dockerCli command.Cli) *cobra.Command {\n\topts := versionOptions{}\n\tcmd := &cobra.Command{\n\t\tUse:   \"version [OPTIONS]\",\n\t\tShort: \"Show the Docker Compose version information\",\n\t\tArgs:  cobra.NoArgs,\n\t\tRunE: func(cmd *cobra.Command, _ []string) error {\n\t\t\trunVersion(opts, dockerCli)\n\t\t\treturn nil\n\t\t},\n\t\tPersistentPreRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\t// overwrite parent PersistentPreRunE to avoid trying to load\n\t\t\t// compose file on version command if COMPOSE_FILE is set\n\t\t\treturn nil\n\t\t},\n\t}\n\t// define flags for backward compatibility with com.docker.cli\n\tflags := cmd.Flags()\n\tflags.StringVarP(&opts.format, \"format\", \"f\", \"\", \"Format the output. Values: [pretty | json]. 
(Default: pretty)\")\n\tflags.BoolVar(&opts.short, \"short\", false, \"Shows only Compose's version number\")\n\n\treturn cmd\n}\n\nfunc runVersion(opts versionOptions, dockerCli command.Cli) {\n\tif opts.short {\n\t\t_, _ = fmt.Fprintln(dockerCli.Out(), strings.TrimPrefix(internal.Version, \"v\"))\n\t\treturn\n\t}\n\tif opts.format == formatter.JSON {\n\t\t_, _ = fmt.Fprintf(dockerCli.Out(), \"{\\\"version\\\":%q}\\n\", internal.Version)\n\t\treturn\n\t}\n\t_, _ = fmt.Fprintln(dockerCli.Out(), \"Docker Compose version\", internal.Version)\n}\n"
  },
  {
    "path": "cmd/compose/version_test.go",
    "content": "/*\n   Copyright 2025 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\t\"github.com/docker/cli/cli/streams\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/internal\"\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\nfunc TestVersionCommand(t *testing.T) {\n\toriginalVersion := internal.Version\n\tdefer func() {\n\t\tinternal.Version = originalVersion\n\t}()\n\tinternal.Version = \"v9.9.9-test\"\n\n\ttests := []struct {\n\t\tname string\n\t\targs []string\n\t\twant string\n\t}{\n\t\t{\n\t\t\tname: \"default\",\n\t\t\targs: []string{},\n\t\t\twant: \"Docker Compose version v9.9.9-test\\n\",\n\t\t},\n\t\t{\n\t\t\tname: \"short flag\",\n\t\t\targs: []string{\"--short\"},\n\t\t\twant: \"9.9.9-test\\n\",\n\t\t},\n\t\t{\n\t\t\tname: \"json flag\",\n\t\t\targs: []string{\"--format\", \"json\"},\n\t\t\twant: `{\"version\":\"v9.9.9-test\"}` + \"\\n\",\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tdefer ctrl.Finish()\n\n\t\t\tbuf := new(bytes.Buffer)\n\t\t\tcli := mocks.NewMockCli(ctrl)\n\t\t\tcli.EXPECT().Out().Return(streams.NewOut(buf)).AnyTimes()\n\n\t\t\tcmd := versionCommand(cli)\n\t\t\tcmd.SetArgs(test.args)\n\t\t\terr := cmd.Execute()\n\t\t\tassert.NilError(t, err)\n\n\t\t\tassert.Equal(t, test.want, 
buf.String())\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cmd/compose/viz.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype vizOptions struct {\n\t*ProjectOptions\n\tincludeNetworks  bool\n\tincludePorts     bool\n\tincludeImageName bool\n\tindentationStr   string\n}\n\nfunc vizCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := vizOptions{\n\t\tProjectOptions: p,\n\t}\n\tvar indentationSize int\n\tvar useSpaces bool\n\n\tcmd := &cobra.Command{\n\t\tUse:   \"viz [OPTIONS]\",\n\t\tShort: \"EXPERIMENTAL - Generate a graphviz graph from your compose file\",\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\tvar err error\n\t\t\topts.indentationStr, err = preferredIndentationStr(indentationSize, useSpaces)\n\t\t\treturn err\n\t\t}),\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runViz(ctx, dockerCli, backendOptions, &opts)\n\t\t}),\n\t}\n\n\tcmd.Flags().BoolVar(&opts.includePorts, \"ports\", false, \"Include service's exposed ports in output graph\")\n\tcmd.Flags().BoolVar(&opts.includeNetworks, \"networks\", false, \"Include service's attached networks in output 
graph\")\n\tcmd.Flags().BoolVar(&opts.includeImageName, \"image\", false, \"Include service's image name in output graph\")\n\tcmd.Flags().IntVar(&indentationSize, \"indentation-size\", 1, \"Number of tabs or spaces to use for indentation\")\n\tcmd.Flags().BoolVar(&useSpaces, \"spaces\", false, \"If given, space character ' ' will be used to indent,\\notherwise tab character '\\\\t' will be used\")\n\treturn cmd\n}\n\nfunc runViz(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts *vizOptions) error {\n\t_, _ = fmt.Fprintln(os.Stderr, \"viz command is EXPERIMENTAL\")\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := opts.ToProject(ctx, dockerCli, backend, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// build graph\n\tgraphStr, _ := backend.Viz(ctx, project, api.VizOptions{\n\t\tIncludeNetworks:  opts.includeNetworks,\n\t\tIncludePorts:     opts.includePorts,\n\t\tIncludeImageName: opts.includeImageName,\n\t\tIndentation:      opts.indentationStr,\n\t})\n\n\tfmt.Println(graphStr)\n\n\treturn nil\n}\n\n// preferredIndentationStr returns a single string given the indentation preference\nfunc preferredIndentationStr(size int, useSpace bool) (string, error) {\n\tif size < 0 {\n\t\treturn \"\", fmt.Errorf(\"invalid indentation size: %d\", size)\n\t}\n\n\tindentationStr := \"\\t\"\n\tif useSpace {\n\t\tindentationStr = \" \"\n\t}\n\treturn strings.Repeat(indentationStr, size), nil\n}\n"
  },
  {
    "path": "cmd/compose/viz_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestPreferredIndentationStr(t *testing.T) {\n\ttype args struct {\n\t\tsize     int\n\t\tuseSpace bool\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"should return '\\\\t\\\\t'\",\n\t\t\targs: args{\n\t\t\t\tsize:     2,\n\t\t\t\tuseSpace: false,\n\t\t\t},\n\t\t\twant:    \"\\t\\t\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"should return '    '\",\n\t\t\targs: args{\n\t\t\t\tsize:     4,\n\t\t\t\tuseSpace: true,\n\t\t\t},\n\t\t\twant:    \"    \",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"should return ''\",\n\t\t\targs: args{\n\t\t\t\tsize:     0,\n\t\t\t\tuseSpace: false,\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"should return ''\",\n\t\t\targs: args{\n\t\t\t\tsize:     0,\n\t\t\t\tuseSpace: true,\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"should throw error because indentation size < 0\",\n\t\t\targs: args{\n\t\t\t\tsize:     -1,\n\t\t\t\tuseSpace: false,\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err 
:= preferredIndentationStr(tt.args.size, tt.args.useSpace)\n\t\t\tif tt.wantErr {\n\t\t\t\trequire.Errorf(t, err, \"preferredIndentationStr(%v, %v)\", tt.args.size, tt.args.useSpace)\n\t\t\t} else {\n\t\t\t\trequire.NoError(t, err)\n\t\t\t\tassert.Equalf(t, tt.want, got, \"preferredIndentationStr(%v, %v)\", tt.args.size, tt.args.useSpace)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "cmd/compose/volumes.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"slices\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/command/formatter\"\n\t\"github.com/docker/cli/cli/flags\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype volumesOptions struct {\n\t*ProjectOptions\n\tQuiet  bool\n\tFormat string\n}\n\nfunc volumesCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\toptions := volumesOptions{\n\t\tProjectOptions: p,\n\t}\n\n\tcmd := &cobra.Command{\n\t\tUse:   \"volumes [OPTIONS] [SERVICE...]\",\n\t\tShort: \"List volumes\",\n\t\tRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn runVol(ctx, dockerCli, backendOptions, args, options)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, p),\n\t}\n\n\tcmd.Flags().BoolVarP(&options.Quiet, \"quiet\", \"q\", false, \"Only display volume names\")\n\tcmd.Flags().StringVar(&options.Format, \"format\", \"table\", flags.FormatHelp)\n\n\treturn cmd\n}\n\nfunc runVol(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, services []string, options volumesOptions) error {\n\tproject, name, err := options.projectOrName(ctx, dockerCli, services...)\n\tif err != nil {\n\t\treturn 
err\n\t}\n\n\tif project != nil {\n\t\tnames := project.ServiceNames()\n\t\tfor _, service := range services {\n\t\t\tif !slices.Contains(names, service) {\n\t\t\t\treturn fmt.Errorf(\"no such service: %s\", service)\n\t\t\t}\n\t\t}\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvolumes, err := backend.Volumes(ctx, name, api.VolumesOptions{\n\t\tServices: services,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif options.Quiet {\n\t\tfor _, v := range volumes {\n\t\t\t_, _ = fmt.Fprintln(dockerCli.Out(), v.Name)\n\t\t}\n\t\treturn nil\n\t}\n\n\tvolumeCtx := formatter.Context{\n\t\tOutput: dockerCli.Out(),\n\t\tFormat: formatter.NewVolumeFormat(options.Format, options.Quiet),\n\t}\n\n\treturn formatter.VolumeWrite(volumeCtx, volumes)\n}\n"
  },
  {
    "path": "cmd/compose/wait.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"os\"\n\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype waitOptions struct {\n\t*ProjectOptions\n\n\tservices []string\n\n\tdownProject bool\n}\n\nfunc waitCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\topts := waitOptions{\n\t\tProjectOptions: p,\n\t}\n\n\tvar statusCode int64\n\tvar err error\n\tcmd := &cobra.Command{\n\t\tUse:   \"wait SERVICE [SERVICE...] 
[OPTIONS]\",\n\t\tShort: \"Block until containers of all (or specified) services stop.\",\n\t\tArgs:  cli.RequiresMinArgs(1),\n\t\tRunE: Adapt(func(ctx context.Context, services []string) error {\n\t\t\topts.services = services\n\t\t\tstatusCode, err = runWait(ctx, dockerCli, backendOptions, &opts)\n\t\t\treturn err\n\t\t}),\n\t\tPostRun: func(cmd *cobra.Command, args []string) {\n\t\t\tos.Exit(int(statusCode))\n\t\t},\n\t}\n\n\tcmd.Flags().BoolVar(&opts.downProject, \"down-project\", false, \"Drops project when the first container stops\")\n\n\treturn cmd\n}\n\nfunc runWait(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, opts *waitOptions) (int64, error) {\n\t_, name, err := opts.projectOrName(ctx, dockerCli)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn backend.Wait(ctx, name, api.WaitOptions{\n\t\tServices:                   opts.services,\n\t\tDownProjectOnContainerExit: opts.downProject,\n\t})\n}\n"
  },
  {
    "path": "cmd/compose/watch.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/internal/locker\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\ntype watchOptions struct {\n\t*ProjectOptions\n\tprune bool\n\tnoUp  bool\n}\n\nfunc watchCommand(p *ProjectOptions, dockerCli command.Cli, backendOptions *BackendOptions) *cobra.Command {\n\twatchOpts := watchOptions{\n\t\tProjectOptions: p,\n\t}\n\tbuildOpts := buildOptions{\n\t\tProjectOptions: p,\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:   \"watch [SERVICE...]\",\n\t\tShort: \"Watch build context for service and rebuild/refresh containers when files are updated\",\n\t\tPreRunE: Adapt(func(ctx context.Context, args []string) error {\n\t\t\treturn nil\n\t\t}),\n\t\tRunE: AdaptCmd(func(ctx context.Context, cmd *cobra.Command, args []string) error {\n\t\t\tif cmd.Parent().Name() == \"alpha\" {\n\t\t\t\tlogrus.Warn(\"watch command is now available as a top level command\")\n\t\t\t}\n\t\t\treturn runWatch(ctx, dockerCli, backendOptions, watchOpts, buildOpts, args)\n\t\t}),\n\t\tValidArgsFunction: completeServiceNames(dockerCli, 
p),\n\t}\n\n\tcmd.Flags().BoolVar(&buildOpts.quiet, \"quiet\", false, \"hide build output\")\n\tcmd.Flags().BoolVar(&watchOpts.prune, \"prune\", true, \"Prune dangling images on rebuild\")\n\tcmd.Flags().BoolVar(&watchOpts.noUp, \"no-up\", false, \"Do not build & start services before watching\")\n\treturn cmd\n}\n\nfunc runWatch(ctx context.Context, dockerCli command.Cli, backendOptions *BackendOptions, watchOpts watchOptions, buildOpts buildOptions, services []string) error {\n\tbackend, err := compose.NewComposeService(dockerCli, backendOptions.Options...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject, _, err := watchOpts.ToProject(ctx, dockerCli, backend, services)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := applyPlatforms(project, true); err != nil {\n\t\treturn err\n\t}\n\n\tbuild, err := buildOpts.toAPIBuildOptions(nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// validation done -- ensure we have the lockfile for this project before doing work\n\tl, err := locker.NewPidfile(project.Name)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot take exclusive lock for project %q: %w\", project.Name, err)\n\t}\n\tif err := l.Lock(); err != nil {\n\t\treturn fmt.Errorf(\"cannot take exclusive lock for project %q: %w\", project.Name, err)\n\t}\n\n\tif !watchOpts.noUp {\n\t\tfor index, service := range project.Services {\n\t\t\tif service.Build != nil && service.Develop != nil {\n\t\t\t\tservice.PullPolicy = types.PullPolicyBuild\n\t\t\t}\n\t\t\tproject.Services[index] = service\n\t\t}\n\t\tupOpts := api.UpOptions{\n\t\t\tCreate: api.CreateOptions{\n\t\t\t\tBuild:                &build,\n\t\t\t\tServices:             services,\n\t\t\t\tRemoveOrphans:        false,\n\t\t\t\tRecreate:             api.RecreateDiverged,\n\t\t\t\tRecreateDependencies: api.RecreateNever,\n\t\t\t\tInherit:              true,\n\t\t\t\tQuietPull:            buildOpts.quiet,\n\t\t\t},\n\t\t\tStart: api.StartOptions{\n\t\t\t\tProject:  project,\n\t\t\t\tAttach:   
nil,\n\t\t\t\tServices: services,\n\t\t\t},\n\t\t}\n\t\tif err := backend.Up(ctx, project, upOpts); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tconsumer := formatter.NewLogConsumer(ctx, dockerCli.Out(), dockerCli.Err(), false, false, false)\n\treturn backend.Watch(ctx, project, api.WatchOptions{\n\t\tBuild:    &build,\n\t\tLogTo:    consumer,\n\t\tPrune:    watchOpts.prune,\n\t\tServices: services,\n\t})\n}\n"
  },
  {
    "path": "cmd/display/colors.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"github.com/morikuni/aec\"\n)\n\ntype colorFunc func(string) string\n\nvar (\n\tnocolor colorFunc = func(s string) string {\n\t\treturn s\n\t}\n\n\tDoneColor    colorFunc = aec.BlueF.Apply\n\tTimerColor   colorFunc = aec.BlueF.Apply\n\tCountColor   colorFunc = aec.YellowF.Apply\n\tWarningColor colorFunc = aec.YellowF.With(aec.Bold).Apply\n\tSuccessColor colorFunc = aec.GreenF.Apply\n\tErrorColor   colorFunc = aec.RedF.With(aec.Bold).Apply\n\tPrefixColor  colorFunc = aec.CyanF.Apply\n)\n\nfunc NoColor() {\n\tDoneColor = nocolor\n\tTimerColor = nocolor\n\tCountColor = nocolor\n\tWarningColor = nocolor\n\tSuccessColor = nocolor\n\tErrorColor = nocolor\n\tPrefixColor = nocolor\n}\n"
  },
  {
    "path": "cmd/display/dryrun.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nconst (\n\tDRYRUN_PREFIX = \" DRY-RUN MODE - \"\n)\n"
  },
  {
    "path": "cmd/display/json.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc JSON(out io.Writer) api.EventProcessor {\n\treturn &jsonWriter{\n\t\tout: out,\n\t}\n}\n\ntype jsonWriter struct {\n\tout    io.Writer\n\tdryRun bool\n}\n\ntype jsonMessage struct {\n\tDryRun   bool   `json:\"dry-run,omitempty\"`\n\tTail     bool   `json:\"tail,omitempty\"`\n\tID       string `json:\"id,omitempty\"`\n\tParentID string `json:\"parent_id,omitempty\"`\n\tStatus   string `json:\"status,omitempty\"`\n\tText     string `json:\"text,omitempty\"`\n\tDetails  string `json:\"details,omitempty\"`\n\tCurrent  int64  `json:\"current,omitempty\"`\n\tTotal    int64  `json:\"total,omitempty\"`\n\tPercent  int    `json:\"percent,omitempty\"`\n}\n\nfunc (p *jsonWriter) Start(ctx context.Context, operation string) {\n}\n\nfunc (p *jsonWriter) Event(e api.Resource) {\n\tmessage := &jsonMessage{\n\t\tDryRun:   p.dryRun,\n\t\tTail:     false,\n\t\tID:       e.ID,\n\t\tStatus:   e.StatusText(),\n\t\tText:     e.Text,\n\t\tDetails:  e.Details,\n\t\tParentID: e.ParentID,\n\t\tCurrent:  e.Current,\n\t\tTotal:    e.Total,\n\t\tPercent:  e.Percent,\n\t}\n\tmarshal, err := json.Marshal(message)\n\tif err == nil {\n\t\t_, _ = fmt.Fprintln(p.out, string(marshal))\n\t}\n}\n\nfunc (p *jsonWriter) On(events 
...api.Resource) {\n\tfor _, e := range events {\n\t\tp.Event(e)\n\t}\n}\n\nfunc (p *jsonWriter) Done(_ string, _ bool) {\n}\n"
  },
  {
    "path": "cmd/display/json_test.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestJsonWriter_Event(t *testing.T) {\n\tvar out bytes.Buffer\n\tw := &jsonWriter{\n\t\tout:    &out,\n\t\tdryRun: true,\n\t}\n\n\tevent := api.Resource{\n\t\tID:       \"service1\",\n\t\tParentID: \"project\",\n\t\tStatus:   api.Working,\n\t\tText:     api.StatusCreating,\n\t\tCurrent:  50,\n\t\tTotal:    100,\n\t\tPercent:  50,\n\t}\n\tw.Event(event)\n\n\tvar actual jsonMessage\n\terr := json.Unmarshal(out.Bytes(), &actual)\n\tassert.NilError(t, err)\n\n\texpected := jsonMessage{\n\t\tDryRun:   true,\n\t\tID:       event.ID,\n\t\tParentID: event.ParentID,\n\t\tText:     api.StatusCreating,\n\t\tStatus:   \"Working\",\n\t\tCurrent:  event.Current,\n\t\tTotal:    event.Total,\n\t\tPercent:  event.Percent,\n\t}\n\tassert.DeepEqual(t, expected, actual)\n}\n"
  },
  {
    "path": "cmd/display/mode.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\n// Mode defines how progress should be rendered: one of ModeAuto, ModeTTY, ModePlain, ModeQuiet or ModeJSON\nvar Mode = ModeAuto\n\nconst (\n\t// ModeAuto detects console capabilities\n\tModeAuto = \"auto\"\n\t// ModeTTY uses terminal capabilities for advanced rendering\n\tModeTTY = \"tty\"\n\t// ModePlain dumps raw events to output\n\tModePlain = \"plain\"\n\t// ModeQuiet doesn't display events\n\tModeQuiet = \"quiet\"\n\t// ModeJSON outputs a machine-readable JSON stream\n\tModeJSON = \"json\"\n)\n"
  },
  {
    "path": "cmd/display/plain.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc Plain(out io.Writer) api.EventProcessor {\n\treturn &plainWriter{\n\t\tout: out,\n\t}\n}\n\ntype plainWriter struct {\n\tout    io.Writer\n\tdryRun bool\n}\n\nfunc (p *plainWriter) Start(ctx context.Context, operation string) {\n}\n\nfunc (p *plainWriter) Event(e api.Resource) {\n\tprefix := \"\"\n\tif p.dryRun {\n\t\tprefix = DRYRUN_PREFIX\n\t}\n\t_, _ = fmt.Fprintln(p.out, prefix, e.ID, e.Text, e.Details)\n}\n\nfunc (p *plainWriter) On(events ...api.Resource) {\n\tfor _, e := range events {\n\t\tp.Event(e)\n\t}\n}\n\nfunc (p *plainWriter) Done(_ string, _ bool) {\n}\n"
  },
  {
    "path": "cmd/display/quiet.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc Quiet() api.EventProcessor {\n\treturn &quiet{}\n}\n\ntype quiet struct{}\n\nfunc (q *quiet) Start(_ context.Context, _ string) {\n}\n\nfunc (q *quiet) Done(_ string, _ bool) {\n}\n\nfunc (q *quiet) On(_ ...api.Resource) {\n}\n"
  },
  {
    "path": "cmd/display/spinner.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"runtime\"\n\t\"time\"\n)\n\ntype Spinner struct {\n\ttime  time.Time\n\tindex int\n\tchars []string\n\tstop  bool\n\tdone  string\n}\n\nfunc NewSpinner() *Spinner {\n\tchars := []string{\n\t\t\"⠋\", \"⠙\", \"⠹\", \"⠸\", \"⠼\", \"⠴\", \"⠦\", \"⠧\", \"⠇\", \"⠏\",\n\t}\n\tdone := \"⠿\"\n\n\tif runtime.GOOS == \"windows\" {\n\t\tchars = []string{\"-\"}\n\t\tdone = \"-\"\n\t}\n\n\treturn &Spinner{\n\t\tindex: 0,\n\t\ttime:  time.Now(),\n\t\tchars: chars,\n\t\tdone:  done,\n\t}\n}\n\nfunc (s *Spinner) String() string {\n\tif s.stop {\n\t\treturn s.done\n\t}\n\n\td := time.Since(s.time)\n\tif d.Milliseconds() > 100 {\n\t\ts.index = (s.index + 1) % len(s.chars)\n\t}\n\n\treturn s.chars[s.index]\n}\n\nfunc (s *Spinner) Stop() {\n\ts.stop = true\n}\n\nfunc (s *Spinner) Restart() {\n\ts.stop = false\n}\n"
  },
  {
    "path": "cmd/display/tty.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"iter\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\t\"unicode/utf8\"\n\n\t\"github.com/buger/goterm\"\n\t\"github.com/docker/go-units\"\n\t\"github.com/morikuni/aec\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\n// Full creates an EventProcessor that renders an advanced UI within a terminal.\n// On Start, the TUI lists tasks with a progress timer\nfunc Full(out io.Writer, info io.Writer, detached bool) api.EventProcessor {\n\treturn &ttyWriter{\n\t\tout:      out,\n\t\tinfo:     info,\n\t\ttasks:    map[string]*task{},\n\t\tdone:     make(chan bool),\n\t\tmtx:      &sync.Mutex{},\n\t\tdetached: detached,\n\t}\n}\n\ntype ttyWriter struct {\n\tout       io.Writer\n\tids       []string // task ids ordered as first event appeared\n\ttasks     map[string]*task\n\trepeated  bool\n\tnumLines  int\n\tdone      chan bool\n\tmtx       *sync.Mutex\n\tdryRun    bool // FIXME(ndeloof) (re)implement support for dry-run\n\toperation string\n\tticker    *time.Ticker\n\tsuspended bool\n\tinfo      io.Writer\n\tdetached  bool\n}\n\ntype task struct {\n\tID        string\n\tparent    string            // the resource this task receives updates from - other parents will be ignored\n\tparents   utils.Set[string] // all resources to 
depend on this task\n\tstartTime time.Time\n\tendTime   time.Time\n\ttext      string\n\tdetails   string\n\tstatus    api.EventStatus\n\tcurrent   int64\n\tpercent   int\n\ttotal     int64\n\tspinner   *Spinner\n}\n\nfunc newTask(e api.Resource) task {\n\tt := task{\n\t\tID:        e.ID,\n\t\tparents:   utils.NewSet[string](),\n\t\tstartTime: time.Now(),\n\t\ttext:      e.Text,\n\t\tdetails:   e.Details,\n\t\tstatus:    e.Status,\n\t\tcurrent:   e.Current,\n\t\tpercent:   e.Percent,\n\t\ttotal:     e.Total,\n\t\tspinner:   NewSpinner(),\n\t}\n\tif e.ParentID != \"\" {\n\t\tt.parent = e.ParentID\n\t\tt.parents.Add(e.ParentID)\n\t}\n\tif e.Status == api.Done || e.Status == api.Error {\n\t\tt.stop()\n\t}\n\treturn t\n}\n\n// update adjusts task state based on last received event\nfunc (t *task) update(e api.Resource) {\n\tif e.ParentID != \"\" {\n\t\tt.parents.Add(e.ParentID)\n\t\t// we may receive same event from distinct parents (typically: images sharing layers)\n\t\t// to avoid status to flicker, only accept updates from our first declared parent\n\t\tif t.parent != e.ParentID {\n\t\t\treturn\n\t\t}\n\t}\n\n\t// update task based on received event\n\tswitch e.Status {\n\tcase api.Done, api.Error, api.Warning:\n\t\tif t.status != e.Status {\n\t\t\tt.stop()\n\t\t}\n\tcase api.Working:\n\t\tt.hasMore()\n\t}\n\tt.status = e.Status\n\tt.text = e.Text\n\tt.details = e.Details\n\t// progress can only go up\n\tif e.Total > t.total {\n\t\tt.total = e.Total\n\t}\n\tif e.Current > t.current {\n\t\tt.current = e.Current\n\t}\n\tif e.Percent > t.percent {\n\t\tt.percent = e.Percent\n\t}\n}\n\nfunc (t *task) stop() {\n\tt.endTime = time.Now()\n\tt.spinner.Stop()\n}\n\nfunc (t *task) hasMore() {\n\tt.spinner.Restart()\n}\n\nfunc (t *task) Completed() bool {\n\tswitch t.status {\n\tcase api.Done, api.Error, api.Warning:\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nfunc (w *ttyWriter) Start(ctx context.Context, operation string) {\n\tw.ticker = time.NewTicker(100 * 
time.Millisecond)\n\tw.operation = operation\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ctx.Done():\n\t\t\t\t// interrupted\n\t\t\t\tw.ticker.Stop()\n\t\t\t\treturn\n\t\t\tcase <-w.done:\n\t\t\t\treturn\n\t\t\tcase <-w.ticker.C:\n\t\t\t\tw.print()\n\t\t\t}\n\t\t}\n\t}()\n}\n\nfunc (w *ttyWriter) Done(operation string, success bool) {\n\tw.print()\n\tw.done <- true\n\tw.mtx.Lock()\n\tdefer w.mtx.Unlock()\n\tif w.ticker != nil {\n\t\tw.ticker.Stop()\n\t}\n\tw.operation = \"\"\n}\n\nfunc (w *ttyWriter) On(events ...api.Resource) {\n\tw.mtx.Lock()\n\tdefer w.mtx.Unlock()\n\tfor _, e := range events {\n\t\tif e.ID == \"Compose\" {\n\t\t\t_, _ = fmt.Fprintln(w.info, ErrorColor(e.Details))\n\t\t\tcontinue\n\t\t}\n\n\t\tif w.operation != \"start\" && (e.Text == api.StatusStarted || e.Text == api.StatusStarting) && !w.detached {\n\t\t\t// skip those events to avoid mix with container logs\n\t\t\tcontinue\n\t\t}\n\t\tw.event(e)\n\t}\n}\n\nfunc (w *ttyWriter) event(e api.Resource) {\n\t// Suspend print while a build is in progress, to avoid collision with buildkit Display\n\tif w.ticker != nil {\n\t\tif e.Text == api.StatusBuilding {\n\t\t\tw.ticker.Stop()\n\t\t\tw.suspended = true\n\t\t} else if w.suspended {\n\t\t\tw.ticker.Reset(100 * time.Millisecond)\n\t\t\tw.suspended = false\n\t\t}\n\t}\n\n\tif last, ok := w.tasks[e.ID]; ok {\n\t\tlast.update(e)\n\t} else {\n\t\tt := newTask(e)\n\t\tw.tasks[e.ID] = &t\n\t\tw.ids = append(w.ids, e.ID)\n\t}\n\tw.printEvent(e)\n}\n\nfunc (w *ttyWriter) printEvent(e api.Resource) {\n\tif w.operation != \"\" {\n\t\t// event will be displayed by progress UI on ticker's ticks\n\t\treturn\n\t}\n\n\tvar color colorFunc\n\tswitch e.Status {\n\tcase api.Working:\n\t\tcolor = SuccessColor\n\tcase api.Done:\n\t\tcolor = SuccessColor\n\tcase api.Warning:\n\t\tcolor = WarningColor\n\tcase api.Error:\n\t\tcolor = ErrorColor\n\t}\n\t_, _ = fmt.Fprintf(w.out, \"%s %s %s\\n\", e.ID, color(e.Text), e.Details)\n}\n\nfunc (w *ttyWriter) 
parentTasks() iter.Seq[*task] {\n\treturn func(yield func(*task) bool) {\n\t\tfor _, id := range w.ids { // iterate on ids to enforce a consistent order\n\t\t\tt := w.tasks[id]\n\t\t\tif len(t.parents) == 0 {\n\t\t\t\tif !yield(t) {\n\t\t\t\t\treturn // iterator contract: stop once the consumer breaks\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (w *ttyWriter) childrenTasks(parent string) iter.Seq[*task] {\n\treturn func(yield func(*task) bool) {\n\t\tfor _, id := range w.ids { // iterate on ids to enforce a consistent order\n\t\t\tt := w.tasks[id]\n\t\t\tif t.parents.Has(parent) {\n\t\t\t\tif !yield(t) {\n\t\t\t\t\treturn // iterator contract: stop once the consumer breaks\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n// lineData holds pre-computed formatting for a task line\ntype lineData struct {\n\tspinner     string // rendered spinner with color\n\tprefix      string // dry-run prefix if any\n\ttaskID      string // possibly abbreviated\n\tprogress    string // progress bar and size info\n\tstatus      string // rendered status with color\n\tdetails     string // possibly abbreviated\n\ttimer       string // rendered timer with color\n\tstatusPad   int    // padding before status to align\n\ttimerPad    int    // padding before timer to align\n\tstatusColor colorFunc\n}\n\nfunc (w *ttyWriter) print() {\n\tterminalWidth := goterm.Width()\n\tterminalHeight := goterm.Height()\n\tif terminalWidth <= 0 {\n\t\tterminalWidth = 80\n\t}\n\tif terminalHeight <= 0 {\n\t\tterminalHeight = 24\n\t}\n\tw.printWithDimensions(terminalWidth, terminalHeight)\n}\n\nfunc (w *ttyWriter) printWithDimensions(terminalWidth, terminalHeight int) {\n\tw.mtx.Lock()\n\tdefer w.mtx.Unlock()\n\tif len(w.tasks) == 0 {\n\t\treturn\n\t}\n\n\tup := w.numLines + 1\n\tif !w.repeated {\n\t\tup--\n\t\tw.repeated = true\n\t}\n\tb := aec.NewBuilder(\n\t\taec.Hide, // Hide the cursor while we are printing\n\t\taec.Up(uint(up)),\n\t\taec.Column(0),\n\t)\n\t_, _ = fmt.Fprint(w.out, b.ANSI)\n\tdefer func() {\n\t\t_, _ = fmt.Fprint(w.out, aec.Show)\n\t}()\n\n\tfirstLine := fmt.Sprintf(\"[+] %s %d/%d\", w.operation, numDone(w.tasks), len(w.tasks))\n\t_, _ = fmt.Fprintln(w.out, 
firstLine)\n\n\t// Collect parent tasks in original order\n\tallTasks := slices.Collect(w.parentTasks())\n\n\t// Available lines: terminal height - 2 (header line + potential \"more\" line)\n\tmaxLines := max(terminalHeight-2, 1)\n\n\tshowMore := len(allTasks) > maxLines\n\ttasksToShow := allTasks\n\tif showMore {\n\t\ttasksToShow = allTasks[:maxLines-1] // Reserve one line for \"more\" message\n\t}\n\n\t// collect line data and compute timerLen\n\tlines := make([]lineData, len(tasksToShow))\n\tvar timerLen int\n\tfor i, t := range tasksToShow {\n\t\tlines[i] = w.prepareLineData(t)\n\t\tif len(lines[i].timer) > timerLen {\n\t\t\ttimerLen = len(lines[i].timer)\n\t\t}\n\t}\n\n\t// shorten details/taskID to fit terminal width\n\tw.adjustLineWidth(lines, timerLen, terminalWidth)\n\n\t// compute padding\n\tw.applyPadding(lines, terminalWidth, timerLen)\n\n\t// Render lines\n\tnumLines := 0\n\tfor _, l := range lines {\n\t\t_, _ = fmt.Fprint(w.out, lineText(l))\n\t\tnumLines++\n\t}\n\n\tif showMore {\n\t\tmoreCount := len(allTasks) - len(tasksToShow)\n\t\tmoreText := fmt.Sprintf(\" ... 
%d more\", moreCount)\n\t\tpad := max(terminalWidth-len(moreText), 0)\n\t\t_, _ = fmt.Fprintf(w.out, \"%s%s\\n\", moreText, strings.Repeat(\" \", pad))\n\t\tnumLines++\n\t}\n\n\t// Clear any remaining lines from previous render\n\tfor i := numLines; i < w.numLines; i++ {\n\t\t_, _ = fmt.Fprintln(w.out, strings.Repeat(\" \", terminalWidth))\n\t\tnumLines++\n\t}\n\tw.numLines = numLines\n}\n\nfunc (w *ttyWriter) applyPadding(lines []lineData, terminalWidth int, timerLen int) {\n\tvar maxBeforeStatus int\n\tfor i := range lines {\n\t\tl := &lines[i]\n\t\t// Width before statusPad: space(1) + spinner(1) + prefix + space(1) + taskID + progress\n\t\tbeforeStatus := 3 + lenAnsi(l.prefix) + utf8.RuneCountInString(l.taskID) + lenAnsi(l.progress)\n\t\tif beforeStatus > maxBeforeStatus {\n\t\t\tmaxBeforeStatus = beforeStatus\n\t\t}\n\t}\n\n\tfor i, l := range lines {\n\t\t// Position before statusPad: space(1) + spinner(1) + prefix + space(1) + taskID + progress\n\t\tbeforeStatus := 3 + lenAnsi(l.prefix) + utf8.RuneCountInString(l.taskID) + lenAnsi(l.progress)\n\t\t// statusPad aligns status; lineText adds 1 more space after statusPad\n\t\tl.statusPad = maxBeforeStatus - beforeStatus\n\n\t\t// Format: beforeStatus + statusPad + space(1) + status\n\t\tlineLen := beforeStatus + l.statusPad + 1 + utf8.RuneCountInString(l.status)\n\t\tif l.details != \"\" {\n\t\t\tlineLen += 1 + utf8.RuneCountInString(l.details)\n\t\t}\n\t\tl.timerPad = max(terminalWidth-lineLen-timerLen, 1)\n\t\tlines[i] = l\n\n\t}\n}\n\nfunc (w *ttyWriter) adjustLineWidth(lines []lineData, timerLen int, terminalWidth int) {\n\tconst minIDLen = 10\n\tmaxStatusLen := maxStatusLength(lines)\n\n\t// Iteratively truncate until all lines fit\n\tfor range 100 { // safety limit\n\t\tmaxBeforeStatus := maxBeforeStatusWidth(lines)\n\t\toverflow := computeOverflow(lines, maxBeforeStatus, maxStatusLen, timerLen, terminalWidth)\n\n\t\tif overflow <= 0 {\n\t\t\tbreak\n\t\t}\n\n\t\t// First try to truncate details, then 
taskID\n\t\tif !truncateDetails(lines, overflow) && !truncateLongestTaskID(lines, overflow, minIDLen) {\n\t\t\tbreak // Can't truncate further\n\t\t}\n\t}\n}\n\n// maxStatusLength returns the maximum status text length across all lines.\nfunc maxStatusLength(lines []lineData) int {\n\tvar maxLen int\n\tfor i := range lines {\n\t\tif len(lines[i].status) > maxLen {\n\t\t\tmaxLen = len(lines[i].status)\n\t\t}\n\t}\n\treturn maxLen\n}\n\n// maxBeforeStatusWidth computes the maximum width before statusPad across all lines.\n// This is: space(1) + spinner(1) + prefix + space(1) + taskID + progress\nfunc maxBeforeStatusWidth(lines []lineData) int {\n\tvar maxWidth int\n\tfor i := range lines {\n\t\tl := &lines[i]\n\t\twidth := 3 + lenAnsi(l.prefix) + len(l.taskID) + lenAnsi(l.progress)\n\t\tif width > maxWidth {\n\t\t\tmaxWidth = width\n\t\t}\n\t}\n\treturn maxWidth\n}\n\n// computeOverflow calculates how many characters the widest line exceeds the terminal width.\n// Returns 0 or negative if all lines fit.\nfunc computeOverflow(lines []lineData, maxBeforeStatus, maxStatusLen, timerLen, terminalWidth int) int {\n\tvar maxOverflow int\n\tfor i := range lines {\n\t\tl := &lines[i]\n\t\tdetailsLen := len(l.details)\n\t\tif detailsLen > 0 {\n\t\t\tdetailsLen++ // space before details\n\t\t}\n\t\t// Line width: maxBeforeStatus + space(1) + status + details + minTimerPad(1) + timer\n\t\tlineWidth := maxBeforeStatus + 1 + maxStatusLen + detailsLen + 1 + timerLen\n\t\toverflow := lineWidth - terminalWidth\n\t\tif overflow > maxOverflow {\n\t\t\tmaxOverflow = overflow\n\t\t}\n\t}\n\treturn maxOverflow\n}\n\n// truncateDetails tries to truncate the first line's details to reduce overflow.\n// Returns true if any truncation was performed.\nfunc truncateDetails(lines []lineData, overflow int) bool {\n\tfor i := range lines {\n\t\tl := &lines[i]\n\t\tif len(l.details) > 3 {\n\t\t\treduction := min(overflow, len(l.details)-3)\n\t\t\tl.details = l.details[:len(l.details)-reduction-3] + 
\"...\"\n\t\t\treturn true\n\t\t} else if l.details != \"\" {\n\t\t\tl.details = \"\"\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// truncateLongestTaskID truncates the longest taskID to reduce overflow.\n// Returns true if truncation was performed.\nfunc truncateLongestTaskID(lines []lineData, overflow, minIDLen int) bool {\n\tlongestIdx := -1\n\tlongestLen := minIDLen\n\tfor i := range lines {\n\t\tif len(lines[i].taskID) > longestLen {\n\t\t\tlongestLen = len(lines[i].taskID)\n\t\t\tlongestIdx = i\n\t\t}\n\t}\n\n\tif longestIdx < 0 {\n\t\treturn false\n\t}\n\n\tl := &lines[longestIdx]\n\treduction := overflow + 3 // account for \"...\"\n\tnewLen := max(len(l.taskID)-reduction, minIDLen-3)\n\tif newLen > 0 {\n\t\tl.taskID = l.taskID[:newLen] + \"...\"\n\t}\n\treturn true\n}\n\nfunc (w *ttyWriter) prepareLineData(t *task) lineData {\n\tendTime := time.Now()\n\tif t.status != api.Working {\n\t\tendTime = t.startTime\n\t\tif (t.endTime != time.Time{}) {\n\t\t\tendTime = t.endTime\n\t\t}\n\t}\n\n\tprefix := \"\"\n\tif w.dryRun {\n\t\tprefix = PrefixColor(DRYRUN_PREFIX)\n\t}\n\n\telapsed := endTime.Sub(t.startTime).Seconds()\n\n\tvar (\n\t\thideDetails bool\n\t\ttotal       int64\n\t\tcurrent     int64\n\t\tcompletion  []string\n\t)\n\n\t// only show the aggregated progress while the root operation is in-progress\n\tif t.status == api.Working {\n\t\tfor child := range w.childrenTasks(t.ID) {\n\t\t\tif child.status == api.Working && child.total == 0 {\n\t\t\t\thideDetails = true\n\t\t\t}\n\t\t\ttotal += child.total\n\t\t\tcurrent += child.current\n\t\t\tr := len(percentChars) - 1\n\t\t\tp := min(child.percent, 100)\n\t\t\tcompletion = append(completion, percentChars[r*p/100])\n\t\t}\n\t}\n\n\tif total == 0 {\n\t\thideDetails = true\n\t}\n\n\tvar progress string\n\tif len(completion) > 0 {\n\t\tprogress = \" [\" + SuccessColor(strings.Join(completion, \"\")) + \"]\"\n\t\tif !hideDetails {\n\t\t\tprogress += fmt.Sprintf(\" %7s / %-7s\", 
units.HumanSize(float64(current)), units.HumanSize(float64(total)))\n\t\t}\n\t}\n\n\treturn lineData{\n\t\tspinner:     spinner(t),\n\t\tprefix:      prefix,\n\t\ttaskID:      t.ID,\n\t\tprogress:    progress,\n\t\tstatus:      t.text,\n\t\tstatusColor: colorFn(t.status),\n\t\tdetails:     t.details,\n\t\ttimer:       fmt.Sprintf(\"%.1fs\", elapsed),\n\t}\n}\n\nfunc lineText(l lineData) string {\n\tvar sb strings.Builder\n\tsb.WriteString(\" \")\n\tsb.WriteString(l.spinner)\n\tsb.WriteString(l.prefix)\n\tsb.WriteString(\" \")\n\tsb.WriteString(l.taskID)\n\tsb.WriteString(l.progress)\n\tsb.WriteString(strings.Repeat(\" \", l.statusPad))\n\tsb.WriteString(\" \")\n\tsb.WriteString(l.statusColor(l.status))\n\tif l.details != \"\" {\n\t\tsb.WriteString(\" \")\n\t\tsb.WriteString(l.details)\n\t}\n\tsb.WriteString(strings.Repeat(\" \", l.timerPad))\n\tsb.WriteString(TimerColor(l.timer))\n\tsb.WriteString(\"\\n\")\n\treturn sb.String()\n}\n\nvar (\n\tspinnerDone    = \"✔\"\n\tspinnerWarning = \"!\"\n\tspinnerError   = \"✘\"\n)\n\nfunc spinner(t *task) string {\n\tswitch t.status {\n\tcase api.Done:\n\t\treturn SuccessColor(spinnerDone)\n\tcase api.Warning:\n\t\treturn WarningColor(spinnerWarning)\n\tcase api.Error:\n\t\treturn ErrorColor(spinnerError)\n\tdefault:\n\t\treturn CountColor(t.spinner.String())\n\t}\n}\n\nfunc colorFn(s api.EventStatus) colorFunc {\n\tswitch s {\n\tcase api.Done:\n\t\treturn SuccessColor\n\tcase api.Warning:\n\t\treturn WarningColor\n\tcase api.Error:\n\t\treturn ErrorColor\n\tdefault:\n\t\treturn nocolor\n\t}\n}\n\nfunc numDone(tasks map[string]*task) int {\n\ti := 0\n\tfor _, t := range tasks {\n\t\tif t.status != api.Working {\n\t\t\ti++\n\t\t}\n\t}\n\treturn i\n}\n\n// lenAnsi count of user-perceived characters in ANSI string.\nfunc lenAnsi(s string) int {\n\tlength := 0\n\tansiCode := false\n\tfor _, r := range s {\n\t\tif r == '\\x1b' {\n\t\t\tansiCode = true\n\t\t\tcontinue\n\t\t}\n\t\tif ansiCode && r == 'm' {\n\t\t\tansiCode = 
false\n\t\t\tcontinue\n\t\t}\n\t\tif !ansiCode {\n\t\t\tlength++\n\t\t}\n\t}\n\treturn length\n}\n\nvar percentChars = strings.Split(\"⠀⡀⣀⣄⣤⣦⣶⣷⣿\", \"\")\n"
  },
  {
    "path": "cmd/display/tty_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage display\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\t\"unicode/utf8\"\n\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc newTestWriter() (*ttyWriter, *bytes.Buffer) {\n\tvar buf bytes.Buffer\n\tw := &ttyWriter{\n\t\tout:       &buf,\n\t\tinfo:      &buf,\n\t\ttasks:     map[string]*task{},\n\t\tdone:      make(chan bool),\n\t\tmtx:       &sync.Mutex{},\n\t\toperation: \"pull\",\n\t}\n\treturn w, &buf\n}\n\nfunc addTask(w *ttyWriter, id, text, details string, status api.EventStatus) {\n\tt := &task{\n\t\tID:        id,\n\t\tparents:   make(map[string]struct{}),\n\t\tstartTime: time.Now(),\n\t\ttext:      text,\n\t\tdetails:   details,\n\t\tstatus:    status,\n\t\tspinner:   NewSpinner(),\n\t}\n\tw.tasks[id] = t\n\tw.ids = append(w.ids, id)\n}\n\n// extractLines parses the output buffer and returns lines without ANSI control sequences\nfunc extractLines(buf *bytes.Buffer) []string {\n\tcontent := buf.String()\n\t// Split by newline\n\trawLines := strings.Split(content, \"\\n\")\n\tvar lines []string\n\tfor _, line := range rawLines {\n\t\t// Skip empty lines and lines that are just ANSI codes\n\t\tif lenAnsi(line) > 0 {\n\t\t\tlines = append(lines, line)\n\t\t}\n\t}\n\treturn lines\n}\n\nfunc TestPrintWithDimensions_LinesFitTerminalWidth(t 
*testing.T) {\n\ttestCases := []struct {\n\t\tname          string\n\t\ttaskID        string\n\t\tstatus        string\n\t\tdetails       string\n\t\tterminalWidth int\n\t}{\n\t\t{\n\t\t\tname:          \"short task fits wide terminal\",\n\t\t\ttaskID:        \"Image foo\",\n\t\t\tstatus:        \"Pulling\",\n\t\t\tdetails:       \"layer abc123\",\n\t\t\tterminalWidth: 100,\n\t\t},\n\t\t{\n\t\t\tname:          \"long details truncated to fit\",\n\t\t\ttaskID:        \"Image foo\",\n\t\t\tstatus:        \"Pulling\",\n\t\t\tdetails:       \"downloading layer sha256:abc123def456789xyz0123456789abcdef\",\n\t\t\tterminalWidth: 50,\n\t\t},\n\t\t{\n\t\t\tname:          \"long taskID truncated to fit\",\n\t\t\ttaskID:        \"very-long-image-name-that-exceeds-terminal-width\",\n\t\t\tstatus:        \"Pulling\",\n\t\t\tdetails:       \"\",\n\t\t\tterminalWidth: 40,\n\t\t},\n\t\t{\n\t\t\tname:          \"both long taskID and details\",\n\t\t\ttaskID:        \"my-very-long-service-name-here\",\n\t\t\tstatus:        \"Downloading\",\n\t\t\tdetails:       \"layer sha256:abc123def456789xyz0123456789\",\n\t\t\tterminalWidth: 50,\n\t\t},\n\t\t{\n\t\t\tname:          \"narrow terminal\",\n\t\t\ttaskID:        \"service-name\",\n\t\t\tstatus:        \"Pulling\",\n\t\t\tdetails:       \"some details\",\n\t\t\tterminalWidth: 35,\n\t\t},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tw, buf := newTestWriter()\n\t\t\taddTask(w, tc.taskID, tc.status, tc.details, api.Working)\n\n\t\t\tw.printWithDimensions(tc.terminalWidth, 24)\n\n\t\t\tlines := extractLines(buf)\n\t\t\tfor i, line := range lines {\n\t\t\t\tlineLen := lenAnsi(line)\n\t\t\t\tassert.Assert(t, lineLen <= tc.terminalWidth,\n\t\t\t\t\t\"line %d has length %d which exceeds terminal width %d: %q\",\n\t\t\t\t\ti, lineLen, tc.terminalWidth, line)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPrintWithDimensions_MultipleTasksFitTerminalWidth(t *testing.T) {\n\tw, buf := newTestWriter()\n\n\t// Add 
multiple tasks with varying lengths\n\taddTask(w, \"Image nginx\", \"Pulling\", \"layer sha256:abc123\", api.Working)\n\taddTask(w, \"Image postgres-database\", \"Pulling\", \"downloading\", api.Working)\n\taddTask(w, \"Image redis\", \"Pulled\", \"\", api.Done)\n\n\tterminalWidth := 60\n\tw.printWithDimensions(terminalWidth, 24)\n\n\tlines := extractLines(buf)\n\tfor i, line := range lines {\n\t\tlineLen := lenAnsi(line)\n\t\tassert.Assert(t, lineLen <= terminalWidth,\n\t\t\t\"line %d has length %d which exceeds terminal width %d: %q\",\n\t\t\ti, lineLen, terminalWidth, line)\n\t}\n}\n\nfunc TestPrintWithDimensions_VeryNarrowTerminal(t *testing.T) {\n\tw, buf := newTestWriter()\n\taddTask(w, \"Image nginx\", \"Pulling\", \"details\", api.Working)\n\n\tterminalWidth := 30\n\tw.printWithDimensions(terminalWidth, 24)\n\n\tlines := extractLines(buf)\n\tfor i, line := range lines {\n\t\tlineLen := lenAnsi(line)\n\t\tassert.Assert(t, lineLen <= terminalWidth,\n\t\t\t\"line %d has length %d which exceeds terminal width %d: %q\",\n\t\t\ti, lineLen, terminalWidth, line)\n\t}\n}\n\nfunc TestPrintWithDimensions_TaskWithProgress(t *testing.T) {\n\tw, buf := newTestWriter()\n\n\t// Create parent task\n\tparent := &task{\n\t\tID:        \"Image nginx\",\n\t\tparents:   make(map[string]struct{}),\n\t\tstartTime: time.Now(),\n\t\ttext:      \"Pulling\",\n\t\tstatus:    api.Working,\n\t\tspinner:   NewSpinner(),\n\t}\n\tw.tasks[\"Image nginx\"] = parent\n\tw.ids = append(w.ids, \"Image nginx\")\n\n\t// Create child tasks to trigger progress display\n\tfor i := range 3 {\n\t\tchild := &task{\n\t\t\tID:        \"layer\" + string(rune('a'+i)),\n\t\t\tparents:   map[string]struct{}{\"Image nginx\": {}},\n\t\t\tstartTime: time.Now(),\n\t\t\ttext:      \"Downloading\",\n\t\t\tstatus:    api.Working,\n\t\t\ttotal:     1000,\n\t\t\tcurrent:   500,\n\t\t\tpercent:   50,\n\t\t\tspinner:   NewSpinner(),\n\t\t}\n\t\tw.tasks[child.ID] = child\n\t\tw.ids = append(w.ids, 
child.ID)\n\t}\n\n\tterminalWidth := 80\n\tw.printWithDimensions(terminalWidth, 24)\n\n\tlines := extractLines(buf)\n\tfor i, line := range lines {\n\t\tlineLen := lenAnsi(line)\n\t\tassert.Assert(t, lineLen <= terminalWidth,\n\t\t\t\"line %d has length %d which exceeds terminal width %d: %q\",\n\t\t\ti, lineLen, terminalWidth, line)\n\t}\n}\n\nfunc TestAdjustLineWidth_DetailsCorrectlyTruncated(t *testing.T) {\n\tw := &ttyWriter{}\n\tlines := []lineData{\n\t\t{\n\t\t\ttaskID:  \"Image foo\",\n\t\t\tstatus:  \"Pulling\",\n\t\t\tdetails: \"downloading layer sha256:abc123def456789xyz\",\n\t\t},\n\t}\n\n\tterminalWidth := 50\n\ttimerLen := 5\n\tw.adjustLineWidth(lines, timerLen, terminalWidth)\n\n\t// Verify the line fits\n\tdetailsLen := len(lines[0].details)\n\tif detailsLen > 0 {\n\t\tdetailsLen++ // space before details\n\t}\n\t// widthWithoutDetails = 5 + prefix(0) + taskID(9) + progress(0) + status(7) + timer(5) = 26\n\tlineWidth := 5 + len(lines[0].taskID) + len(lines[0].status) + detailsLen + timerLen\n\n\tassert.Assert(t, lineWidth <= terminalWidth,\n\t\t\"line width %d should not exceed terminal width %d (taskID=%q, details=%q)\",\n\t\tlineWidth, terminalWidth, lines[0].taskID, lines[0].details)\n\n\t// Verify details were truncated (not removed entirely)\n\tassert.Assert(t, lines[0].details != \"\", \"details should be truncated, not removed\")\n\tassert.Assert(t, strings.HasSuffix(lines[0].details, \"...\"), \"truncated details should end with ...\")\n}\n\nfunc TestAdjustLineWidth_TaskIDCorrectlyTruncated(t *testing.T) {\n\tw := &ttyWriter{}\n\tlines := []lineData{\n\t\t{\n\t\t\ttaskID:  \"very-long-image-name-that-exceeds-minimum-length\",\n\t\t\tstatus:  \"Pulling\",\n\t\t\tdetails: \"\",\n\t\t},\n\t}\n\n\tterminalWidth := 40\n\ttimerLen := 5\n\tw.adjustLineWidth(lines, timerLen, terminalWidth)\n\n\tlineWidth := 5 + len(lines[0].taskID) + 7 + timerLen\n\n\tassert.Assert(t, lineWidth <= terminalWidth,\n\t\t\"line width %d should not exceed terminal width 
%d (taskID=%q)\",\n\t\tlineWidth, terminalWidth, lines[0].taskID)\n\n\tassert.Assert(t, strings.HasSuffix(lines[0].taskID, \"...\"), \"truncated taskID should end with ...\")\n}\n\nfunc TestAdjustLineWidth_NoTruncationNeeded(t *testing.T) {\n\tw := &ttyWriter{}\n\toriginalDetails := \"short\"\n\toriginalTaskID := \"Image foo\"\n\tlines := []lineData{\n\t\t{\n\t\t\ttaskID:  originalTaskID,\n\t\t\tstatus:  \"Pulling\",\n\t\t\tdetails: originalDetails,\n\t\t},\n\t}\n\n\t// Wide terminal, nothing should be truncated\n\tw.adjustLineWidth(lines, 5, 100)\n\n\tassert.Equal(t, originalTaskID, lines[0].taskID, \"taskID should not be modified\")\n\tassert.Equal(t, originalDetails, lines[0].details, \"details should not be modified\")\n}\n\nfunc TestAdjustLineWidth_DetailsRemovedWhenTooShort(t *testing.T) {\n\tw := &ttyWriter{}\n\tlines := []lineData{\n\t\t{\n\t\t\ttaskID:  \"Image foo\",\n\t\t\tstatus:  \"Pulling\",\n\t\t\tdetails: \"abc\", // Very short, can't be meaningfully truncated\n\t\t},\n\t}\n\n\t// Terminal so narrow that even minimal details + \"...\" wouldn't help\n\tw.adjustLineWidth(lines, 5, 28)\n\n\tassert.Equal(t, \"\", lines[0].details, \"details should be removed entirely when too short to truncate\")\n}\n\n// stripAnsi removes ANSI escape codes from a string\nfunc stripAnsi(s string) string {\n\tvar result strings.Builder\n\tinAnsi := false\n\tfor _, r := range s {\n\t\tif r == '\\x1b' {\n\t\t\tinAnsi = true\n\t\t\tcontinue\n\t\t}\n\t\tif inAnsi {\n\t\t\t// ANSI sequences end with a letter (m, h, l, G, etc.)\n\t\t\tif (r >= 'A' && r <= 'Z') || (r >= 'a' && r <= 'z') {\n\t\t\t\tinAnsi = false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tresult.WriteRune(r)\n\t}\n\treturn result.String()\n}\n\nfunc TestPrintWithDimensions_PulledAndPullingWithLongIDs(t *testing.T) {\n\tw, buf := newTestWriter()\n\n\t// Add a completed task with long ID\n\tcompletedTask := &task{\n\t\tID:        \"Image docker.io/library/nginx-long-name\",\n\t\tparents:   
make(map[string]struct{}),\n\t\tstartTime: time.Now().Add(-2 * time.Second),\n\t\tendTime:   time.Now(),\n\t\ttext:      \"Pulled\",\n\t\tstatus:    api.Done,\n\t\tspinner:   NewSpinner(),\n\t}\n\tcompletedTask.spinner.Stop()\n\tw.tasks[completedTask.ID] = completedTask\n\tw.ids = append(w.ids, completedTask.ID)\n\n\t// Add a pending task with long ID\n\tpendingTask := &task{\n\t\tID:        \"Image docker.io/library/postgres-database\",\n\t\tparents:   make(map[string]struct{}),\n\t\tstartTime: time.Now(),\n\t\ttext:      \"Pulling\",\n\t\tstatus:    api.Working,\n\t\tspinner:   NewSpinner(),\n\t}\n\tw.tasks[pendingTask.ID] = pendingTask\n\tw.ids = append(w.ids, pendingTask.ID)\n\n\tterminalWidth := 50\n\tw.printWithDimensions(terminalWidth, 24)\n\n\t// Strip all ANSI codes from output and split by newline\n\tstripped := stripAnsi(buf.String())\n\tlines := strings.Split(stripped, \"\\n\")\n\n\t// Filter non-empty lines\n\tvar nonEmptyLines []string\n\tfor _, line := range lines {\n\t\tif strings.TrimSpace(line) != \"\" {\n\t\t\tnonEmptyLines = append(nonEmptyLines, line)\n\t\t}\n\t}\n\n\t// Expected output format (50 runes per task line)\n\texpected := `[+] pull 1/2\n ✔ Image docker.io/library/nginx-l... Pulled  2.0s\n ⠋ Image docker.io/library/postgre... 
Pulling 0.0s`\n\n\texpectedLines := strings.Split(expected, \"\\n\")\n\n\t// Debug output\n\tt.Logf(\"Actual output:\\n\")\n\tfor i, line := range nonEmptyLines {\n\t\tt.Logf(\"  line %d (%2d runes): %q\", i, utf8.RuneCountInString(line), line)\n\t}\n\n\t// Verify number of lines\n\tassert.Equal(t, len(expectedLines), len(nonEmptyLines), \"number of lines should match\")\n\n\t// Verify each line matches expected\n\tfor i, line := range nonEmptyLines {\n\t\tif i < len(expectedLines) {\n\t\t\tassert.Equal(t, expectedLines[i], line,\n\t\t\t\t\"line %d should match expected\", i)\n\t\t}\n\t}\n\n\t// Verify task lines fit within terminal width (strict - no tolerance)\n\tfor i, line := range nonEmptyLines {\n\t\tif i > 0 { // Skip header line\n\t\t\truneCount := utf8.RuneCountInString(line)\n\t\t\tassert.Assert(t, runeCount <= terminalWidth,\n\t\t\t\t\"line %d has %d runes which exceeds terminal width %d: %q\",\n\t\t\t\ti, runeCount, terminalWidth, line)\n\t\t}\n\t}\n}\n\nfunc TestLenAnsi(t *testing.T) {\n\ttestCases := []struct {\n\t\tinput    string\n\t\texpected int\n\t}{\n\t\t{\"hello\", 5},\n\t\t{\"\\x1b[32mhello\\x1b[0m\", 5},\n\t\t{\"\\x1b[1;32mgreen\\x1b[0m text\", 10},\n\t\t{\"\", 0},\n\t\t{\"\\x1b[0m\", 0},\n\t}\n\n\tfor _, tc := range testCases {\n\t\tt.Run(tc.input, func(t *testing.T) {\n\t\t\tresult := lenAnsi(tc.input)\n\t\t\tassert.Equal(t, tc.expected, result)\n\t\t})\n\t}\n}\n\nfunc TestDoneDeadlockFix(t *testing.T) {\n\tw, _ := newTestWriter()\n\taddTask(w, \"test-task\", \"Working\", \"details\", api.Working)\n\tctx, cancel := context.WithCancel(t.Context())\n\tdefer cancel()\n\n\tw.Start(ctx, \"test\")\n\tdone := make(chan bool)\n\tgo func() {\n\t\tw.Done(\"test\", true)\n\t\tdone <- true\n\t}()\n\n\tselect {\n\tcase <-done:\n\tcase <-time.After(5 * time.Second):\n\t\tt.Fatal(\"Deadlock detected: Done() did not complete within 5 seconds\")\n\t}\n}\n"
  },
  {
    "path": "cmd/formatter/ansi.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/acarl005/stripansi\"\n\t\"github.com/morikuni/aec\"\n)\n\nvar disableAnsi bool\n\nfunc saveCursor() {\n\tif disableAnsi {\n\t\treturn\n\t}\n\t// see https://github.com/morikuni/aec/pull/5\n\tfmt.Print(aec.Save)\n}\n\nfunc restoreCursor() {\n\tif disableAnsi {\n\t\treturn\n\t}\n\t// see https://github.com/morikuni/aec/pull/5\n\tfmt.Print(aec.Restore)\n}\n\nfunc showCursor() {\n\tif disableAnsi {\n\t\treturn\n\t}\n\tfmt.Print(aec.Show)\n}\n\nfunc moveCursor(y, x int) {\n\tif disableAnsi {\n\t\treturn\n\t}\n\tfmt.Print(aec.Position(uint(y), uint(x)))\n}\n\nfunc carriageReturn() {\n\tif disableAnsi {\n\t\treturn\n\t}\n\tfmt.Print(aec.Column(0))\n}\n\nfunc clearLine() {\n\tif disableAnsi {\n\t\treturn\n\t}\n\t// Does not move cursor from its current position\n\tfmt.Print(aec.EraseLine(aec.EraseModes.Tail))\n}\n\nfunc moveCursorUp(lines int) {\n\tif disableAnsi {\n\t\treturn\n\t}\n\t// Does not add new lines\n\tfmt.Print(aec.Up(uint(lines)))\n}\n\nfunc moveCursorDown(lines int) {\n\tif disableAnsi {\n\t\treturn\n\t}\n\t// Does not add new lines\n\tfmt.Print(aec.Down(uint(lines)))\n}\n\nfunc newLine() {\n\t// Like \\n\n\tfmt.Print(\"\\012\")\n}\n\nfunc lenAnsi(s string) int {\n\t// len would also count ANSI escape codes, so strip them\n\t// first to get the length of the visible string\n\treturn len(stripansi.Strip(s))\n}\n"
  },
  {
    "path": "cmd/formatter/colors.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/docker/cli/cli/command\"\n)\n\nvar names = []string{\n\t\"grey\",\n\t\"red\",\n\t\"green\",\n\t\"yellow\",\n\t\"blue\",\n\t\"magenta\",\n\t\"cyan\",\n\t\"white\",\n}\n\nconst (\n\tBOLD      = \"1\"\n\tFAINT     = \"2\"\n\tITALIC    = \"3\"\n\tUNDERLINE = \"4\"\n)\n\nconst (\n\tRESET = \"0\"\n\tCYAN  = \"36\"\n)\n\nconst (\n\t// Never use ANSI codes\n\tNever = \"never\"\n\n\t// Always use ANSI codes\n\tAlways = \"always\"\n\n\t// Auto detect terminal is a tty and can use ANSI codes\n\tAuto = \"auto\"\n)\n\n// ansiColorOffset is the offset for basic foreground colors in ANSI escape codes.\nconst ansiColorOffset = 30\n\n// SetANSIMode configure formatter for colored output on ANSI-compliant console\nfunc SetANSIMode(streams command.Streams, ansi string) {\n\tif !useAnsi(streams, ansi) {\n\t\tnextColor = func() colorFunc {\n\t\t\treturn monochrome\n\t\t}\n\t\tdisableAnsi = true\n\t}\n}\n\nfunc useAnsi(streams command.Streams, ansi string) bool {\n\tswitch ansi {\n\tcase Always:\n\t\treturn true\n\tcase Auto:\n\t\treturn streams.Out().IsTerminal()\n\t}\n\treturn false\n}\n\n// colorFunc use ANSI codes to render colored text on console\ntype colorFunc func(s string) string\n\nvar monochrome = func(s string) string {\n\treturn s\n}\n\nfunc 
ansiColor(code, s string, formatOpts ...string) string {\n\treturn fmt.Sprintf(\"%s%s%s\", ansiColorCode(code, formatOpts...), s, ansiColorCode(\"0\"))\n}\n\n// Everything about ansiColorCode color https://hyperskill.org/learn/step/18193\nfunc ansiColorCode(code string, formatOpts ...string) string {\n\tvar sb strings.Builder\n\tsb.WriteString(\"\\033[\")\n\tfor _, c := range formatOpts {\n\t\tsb.WriteString(c)\n\t\tsb.WriteString(\";\")\n\t}\n\tsb.WriteString(code)\n\tsb.WriteString(\"m\")\n\treturn sb.String()\n}\n\nfunc makeColorFunc(code string) colorFunc {\n\treturn func(s string) string {\n\t\treturn ansiColor(code, s)\n\t}\n}\n\nvar (\n\tnextColor    = rainbowColor\n\trainbow      []colorFunc\n\tcurrentIndex = 0\n\tmutex        sync.Mutex\n)\n\nfunc rainbowColor() colorFunc {\n\tmutex.Lock()\n\tdefer mutex.Unlock()\n\tresult := rainbow[currentIndex]\n\tcurrentIndex = (currentIndex + 1) % len(rainbow)\n\treturn result\n}\n\nfunc init() {\n\tcolors := map[string]colorFunc{}\n\tfor i, name := range names {\n\t\tcolors[name] = makeColorFunc(strconv.Itoa(ansiColorOffset + i))\n\t\tcolors[\"intense_\"+name] = makeColorFunc(strconv.Itoa(ansiColorOffset+i) + \";1\")\n\t}\n\trainbow = []colorFunc{\n\t\tcolors[\"cyan\"],\n\t\tcolors[\"yellow\"],\n\t\tcolors[\"green\"],\n\t\tcolors[\"magenta\"],\n\t\tcolors[\"blue\"],\n\t\tcolors[\"intense_cyan\"],\n\t\tcolors[\"intense_yellow\"],\n\t\tcolors[\"intense_green\"],\n\t\tcolors[\"intense_magenta\"],\n\t\tcolors[\"intense_blue\"],\n\t}\n}\n"
  },
  {
    "path": "cmd/formatter/consts.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nconst (\n\t// JSON prints output in JSON format\n\tJSON = \"json\"\n\t// TemplateLegacyJSON is the legacy JSON formatting value using a go template\n\tTemplateLegacyJSON = \"{{json.}}\"\n\t// PRETTY is the constant for default formats on list commands\n\t//\n\t// Deprecated: use TABLE\n\tPRETTY = \"pretty\"\n\t// TABLE prints output in table format with column headers (default)\n\tTABLE = \"table\"\n)\n"
  },
  {
    "path": "cmd/formatter/container.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"fmt\"\n\t\"net/netip\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/docker/cli/cli/command/formatter\"\n\t\"github.com/docker/go-units\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client/pkg/stringid\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nconst (\n\tdefaultContainerTableFormat = \"table {{.Name}}\\t{{.Image}}\\t{{.Command}}\\t{{.Service}}\\t{{.RunningFor}}\\t{{.Status}}\\t{{.Ports}}\"\n\n\tnameHeader       = \"NAME\"\n\tprojectHeader    = \"PROJECT\"\n\tserviceHeader    = \"SERVICE\"\n\tcommandHeader    = \"COMMAND\"\n\trunningForHeader = \"CREATED\"\n\tmountsHeader     = \"MOUNTS\"\n\tlocalVolumes     = \"LOCAL VOLUMES\"\n\tnetworksHeader   = \"NETWORKS\"\n)\n\n// NewContainerFormat returns a Format for rendering using a Context\nfunc NewContainerFormat(source string, quiet bool, size bool) formatter.Format {\n\tswitch source {\n\tcase formatter.TableFormatKey, \"\": // table formatting is the default if none is set.\n\t\tif quiet {\n\t\t\treturn formatter.DefaultQuietFormat\n\t\t}\n\t\tformat := defaultContainerTableFormat\n\t\tif size {\n\t\t\tformat += `\\t{{.Size}}`\n\t\t}\n\t\treturn formatter.Format(format)\n\tcase formatter.RawFormatKey:\n\t\tif quiet {\n\t\t\treturn `container_id: {{.ID}}`\n\t\t}\n\t\tformat := `container_id: 
{{.ID}}\nimage: {{.Image}}\ncommand: {{.Command}}\ncreated_at: {{.CreatedAt}}\nstate: {{- pad .State 1 0}}\nstatus: {{- pad .Status 1 0}}\nnames: {{.Names}}\nlabels: {{- pad .Labels 1 0}}\nports: {{- pad .Ports 1 0}}\n`\n\t\tif size {\n\t\t\tformat += `size: {{.Size}}\\n`\n\t\t}\n\t\treturn formatter.Format(format)\n\tdefault: // custom format\n\t\tif quiet {\n\t\t\treturn formatter.DefaultQuietFormat\n\t\t}\n\t\treturn formatter.Format(source)\n\t}\n}\n\n// ContainerWrite renders the context for a list of containers\nfunc ContainerWrite(ctx formatter.Context, containers []api.ContainerSummary) error {\n\trender := func(format func(subContext formatter.SubContext) error) error {\n\t\tfor _, container := range containers {\n\t\t\terr := format(&ContainerContext{trunc: ctx.Trunc, c: container})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\treturn ctx.Write(NewContainerContext(), render)\n}\n\n// ContainerContext is a struct used for rendering a list of containers in a Go template.\ntype ContainerContext struct {\n\tformatter.HeaderContext\n\ttrunc bool\n\tc     api.ContainerSummary\n\n\t// FieldsUsed is used in the pre-processing step to detect which fields are\n\t// used in the template. 
It's currently only used to detect use of the .Size\n\t// field which (if used) automatically sets the '--size' option when making\n\t// the API call.\n\tFieldsUsed map[string]any\n}\n\n// NewContainerContext creates a new context for rendering containers\nfunc NewContainerContext() *ContainerContext {\n\tcontainerCtx := ContainerContext{}\n\tcontainerCtx.Header = formatter.SubHeaderContext{\n\t\t\"ID\":         formatter.ContainerIDHeader,\n\t\t\"Name\":       nameHeader,\n\t\t\"Project\":    projectHeader,\n\t\t\"Service\":    serviceHeader,\n\t\t\"Image\":      formatter.ImageHeader,\n\t\t\"Command\":    commandHeader,\n\t\t\"CreatedAt\":  formatter.CreatedAtHeader,\n\t\t\"RunningFor\": runningForHeader,\n\t\t\"Ports\":      formatter.PortsHeader,\n\t\t\"State\":      formatter.StateHeader,\n\t\t\"Status\":     formatter.StatusHeader,\n\t\t\"Size\":       formatter.SizeHeader,\n\t\t\"Labels\":     formatter.LabelsHeader,\n\t}\n\treturn &containerCtx\n}\n\n// MarshalJSON makes ContainerContext implement json.Marshaler\nfunc (c *ContainerContext) MarshalJSON() ([]byte, error) {\n\treturn formatter.MarshalJSON(c)\n}\n\n// ID returns the container's ID as a string. Depending on the `--no-trunc`\n// option being set, the full or truncated ID is returned.\nfunc (c *ContainerContext) ID() string {\n\tif c.trunc {\n\t\treturn stringid.TruncateID(c.c.ID)\n\t}\n\treturn c.c.ID\n}\n\nfunc (c *ContainerContext) Name() string {\n\treturn c.c.Name\n}\n\n// Names returns a comma-separated string of the container's names, with their\n// slash (/) prefix stripped. 
Additional names for the container (related to the\n// legacy `--link` feature) are omitted.\nfunc (c *ContainerContext) Names() string {\n\tnames := formatter.StripNamePrefix(c.c.Names)\n\tif c.trunc {\n\t\tfor _, name := range names {\n\t\t\tif len(strings.Split(name, \"/\")) == 1 {\n\t\t\t\tnames = []string{name}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\treturn strings.Join(names, \",\")\n}\n\nfunc (c *ContainerContext) Service() string {\n\treturn c.c.Service\n}\n\nfunc (c *ContainerContext) Project() string {\n\treturn c.c.Project\n}\n\nfunc (c *ContainerContext) Image() string {\n\treturn c.c.Image\n}\n\nfunc (c *ContainerContext) Command() string {\n\tcommand := c.c.Command\n\tif c.trunc {\n\t\tcommand = formatter.Ellipsis(command, 20)\n\t}\n\treturn strconv.Quote(command)\n}\n\nfunc (c *ContainerContext) CreatedAt() string {\n\treturn time.Unix(c.c.Created, 0).String()\n}\n\nfunc (c *ContainerContext) RunningFor() string {\n\tcreatedAt := time.Unix(c.c.Created, 0)\n\treturn units.HumanDuration(time.Now().UTC().Sub(createdAt)) + \" ago\"\n}\n\nfunc (c *ContainerContext) ExitCode() int {\n\treturn c.c.ExitCode\n}\n\nfunc (c *ContainerContext) State() string {\n\treturn string(c.c.State)\n}\n\nfunc (c *ContainerContext) Status() string {\n\treturn c.c.Status\n}\n\nfunc (c *ContainerContext) Health() string {\n\treturn string(c.c.Health)\n}\n\nfunc (c *ContainerContext) Publishers() api.PortPublishers {\n\treturn c.c.Publishers\n}\n\nfunc (c *ContainerContext) Ports() string {\n\tvar ports []container.PortSummary\n\tfor _, publisher := range c.c.Publishers {\n\t\tvar pIP netip.Addr\n\t\tif publisher.URL != \"\" {\n\t\t\tif p, err := netip.ParseAddr(publisher.URL); err == nil {\n\t\t\t\tpIP = p\n\t\t\t}\n\t\t}\n\t\tports = append(ports, container.PortSummary{\n\t\t\tIP:          pIP,\n\t\t\tPrivatePort: uint16(publisher.TargetPort),\n\t\t\tPublicPort:  uint16(publisher.PublishedPort),\n\t\t\tType:        publisher.Protocol,\n\t\t})\n\t}\n\treturn 
formatter.DisplayablePorts(ports)\n}\n\n// Labels returns a comma-separated string of labels present on the container.\nfunc (c *ContainerContext) Labels() string {\n\tif c.c.Labels == nil {\n\t\treturn \"\"\n\t}\n\n\tvar joinLabels []string\n\tfor k, v := range c.c.Labels {\n\t\tjoinLabels = append(joinLabels, fmt.Sprintf(\"%s=%s\", k, v))\n\t}\n\treturn strings.Join(joinLabels, \",\")\n}\n\n// Label returns the value of the label with the given name or an empty string\n// if the given label does not exist.\nfunc (c *ContainerContext) Label(name string) string {\n\tif c.c.Labels == nil {\n\t\treturn \"\"\n\t}\n\treturn c.c.Labels[name]\n}\n\n// Mounts returns a comma-separated string of mount names present on the container.\n// If the trunc option is set, names can be truncated (ellipsized).\nfunc (c *ContainerContext) Mounts() string {\n\tvar mounts []string\n\tfor _, name := range c.c.Mounts {\n\t\tif c.trunc {\n\t\t\tname = formatter.Ellipsis(name, 15)\n\t\t}\n\t\tmounts = append(mounts, name)\n\t}\n\treturn strings.Join(mounts, \",\")\n}\n\n// LocalVolumes returns the number of volumes using the \"local\" volume driver.\nfunc (c *ContainerContext) LocalVolumes() string {\n\treturn fmt.Sprintf(\"%d\", c.c.LocalVolumes)\n}\n\n// Networks returns a comma-separated string of networks that the container is\n// attached to.\nfunc (c *ContainerContext) Networks() string {\n\treturn strings.Join(c.c.Networks, \",\")\n}\n\n// Size returns the container's size and virtual size (e.g. \"2B (virtual 21.5MB)\")\nfunc (c *ContainerContext) Size() string {\n\tif c.FieldsUsed == nil {\n\t\tc.FieldsUsed = map[string]any{}\n\t}\n\tc.FieldsUsed[\"Size\"] = struct{}{}\n\tsrw := units.HumanSizeWithPrecision(float64(c.c.SizeRw), 3)\n\tsv := units.HumanSizeWithPrecision(float64(c.c.SizeRootFs), 3)\n\n\tsf := srw\n\tif c.c.SizeRootFs > 0 {\n\t\tsf = fmt.Sprintf(\"%s (virtual %s)\", srw, sv)\n\t}\n\treturn sf\n}\n"
  },
  {
    "path": "cmd/formatter/formatter.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"reflect\"\n\t\"strings\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// Print prints formatted lists in different formats\nfunc Print(toJSON any, format string, outWriter io.Writer, writerFn func(w io.Writer), headers ...string) error {\n\tswitch strings.ToLower(format) {\n\tcase TABLE, PRETTY, \"\":\n\t\treturn PrintPrettySection(outWriter, writerFn, headers...)\n\tcase TemplateLegacyJSON:\n\t\tswitch reflect.TypeOf(toJSON).Kind() {\n\t\tcase reflect.Slice:\n\t\t\ts := reflect.ValueOf(toJSON)\n\t\t\tfor i := 0; i < s.Len(); i++ {\n\t\t\t\tobj := s.Index(i).Interface()\n\t\t\t\toutJSON, err := ToJSON(obj, \"\", \"\")\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\t_, _ = fmt.Fprint(outWriter, outJSON)\n\t\t\t}\n\t\tdefault:\n\t\t\toutJSON, err := ToStandardJSON(toJSON)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\t_, _ = fmt.Fprintln(outWriter, outJSON)\n\t\t}\n\tcase JSON:\n\t\tswitch reflect.TypeOf(toJSON).Kind() {\n\t\tcase reflect.Slice:\n\t\t\toutJSON, err := ToJSON(toJSON, \"\", \"\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\t_, _ = fmt.Fprint(outWriter, outJSON)\n\t\tdefault:\n\t\t\toutJSON, err := ToStandardJSON(toJSON)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\t_, _ = fmt.Fprintln(outWriter, 
outJSON)\n\t\t}\n\tdefault:\n\t\treturn fmt.Errorf(\"format value %q could not be parsed: %w\", format, api.ErrParsingFailed)\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "cmd/formatter/formatter_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"testing\"\n\n\t\"go.uber.org/goleak\"\n\t\"gotest.tools/v3/assert\"\n)\n\ntype testStruct struct {\n\tName   string\n\tStatus string\n}\n\n// Print prints formatted lists in different formats\nfunc TestPrint(t *testing.T) {\n\ttestList := []testStruct{\n\t\t{\n\t\t\tName:   \"myName1\",\n\t\t\tStatus: \"myStatus1\",\n\t\t},\n\t\t{\n\t\t\tName:   \"myName2\",\n\t\t\tStatus: \"myStatus2\",\n\t\t},\n\t}\n\n\tb := &bytes.Buffer{}\n\tassert.NilError(t, Print(testList, TABLE, b, func(w io.Writer) {\n\t\tfor _, t := range testList {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\n\", t.Name, t.Status)\n\t\t}\n\t}, \"NAME\", \"STATUS\"))\n\tassert.Equal(t, b.String(), \"NAME                STATUS\\nmyName1             myStatus1\\nmyName2             myStatus2\\n\")\n\n\tb.Reset()\n\tassert.NilError(t, Print(testList, JSON, b, func(w io.Writer) {\n\t\tfor _, t := range testList {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\n\", t.Name, t.Status)\n\t\t}\n\t}, \"NAME\", \"STATUS\"))\n\tassert.Equal(t, b.String(), `[{\"Name\":\"myName1\",\"Status\":\"myStatus1\"},{\"Name\":\"myName2\",\"Status\":\"myStatus2\"}]\n`)\n\n\tb.Reset()\n\tassert.NilError(t, Print(testList, TemplateLegacyJSON, b, func(w io.Writer) {\n\t\tfor _, t := range testList {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s\\t%s\\n\", 
t.Name, t.Status)\n\t\t}\n\t}, \"NAME\", \"STATUS\"))\n\tjson := b.String()\n\tassert.Equal(t, json, `{\"Name\":\"myName1\",\"Status\":\"myStatus1\"}\n{\"Name\":\"myName2\",\"Status\":\"myStatus2\"}\n`)\n}\n\nfunc TestColorsGoroutinesLeak(t *testing.T) {\n\tgoleak.VerifyNone(t)\n}\n"
  },
  {
    "path": "cmd/formatter/json.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n)\n\nconst standardIndentation = \"    \"\n\n// ToStandardJSON returns a string with the JSON representation of the given value\nfunc ToStandardJSON(i any) (string, error) {\n\treturn ToJSON(i, \"\", standardIndentation)\n}\n\n// ToJSON returns a string with the JSON representation of the given value\nfunc ToJSON(i any, prefix string, indentation string) (string, error) {\n\tbuffer := &bytes.Buffer{}\n\tencoder := json.NewEncoder(buffer)\n\tencoder.SetEscapeHTML(false)\n\tencoder.SetIndent(prefix, indentation)\n\terr := encoder.Encode(i)\n\treturn buffer.String(), err\n}\n"
  },
  {
    "path": "cmd/formatter/logs.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/buger/goterm\"\n\t\"github.com/moby/moby/client/pkg/jsonmessage\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// logConsumer consumes logs from services and formats them\ntype logConsumer struct {\n\tctx        context.Context\n\tpresenters sync.Map // map[string]*presenter\n\twidth      int\n\tstdout     io.Writer\n\tstderr     io.Writer\n\tcolor      bool\n\tprefix     bool\n\ttimestamp  bool\n}\n\n// NewLogConsumer creates a new LogConsumer\nfunc NewLogConsumer(ctx context.Context, stdout, stderr io.Writer, color, prefix, timestamp bool) api.LogConsumer {\n\treturn &logConsumer{\n\t\tctx:        ctx,\n\t\tpresenters: sync.Map{},\n\t\twidth:      0,\n\t\tstdout:     stdout,\n\t\tstderr:     stderr,\n\t\tcolor:      color,\n\t\tprefix:     prefix,\n\t\ttimestamp:  timestamp,\n\t}\n}\n\nfunc (l *logConsumer) register(name string) *presenter {\n\tvar p *presenter\n\troot, _, found := strings.Cut(name, \" \")\n\tif found {\n\t\tparent := l.getPresenter(root)\n\t\tp = &presenter{\n\t\t\tcolors: parent.colors,\n\t\t\tname:   name,\n\t\t\tprefix: parent.prefix,\n\t\t}\n\t} else {\n\t\tcf := monochrome\n\t\tif l.color {\n\t\t\tswitch name {\n\t\t\tcase \"\":\n\t\t\t\tcf = monochrome\n\t\t\tcase 
api.WatchLogger:\n\t\t\t\tcf = makeColorFunc(\"92\")\n\t\t\tdefault:\n\t\t\t\tcf = nextColor()\n\t\t\t}\n\t\t}\n\t\tp = &presenter{\n\t\t\tcolors: cf,\n\t\t\tname:   name,\n\t\t}\n\t}\n\tl.presenters.Store(name, p)\n\tl.computeWidth()\n\tif l.prefix {\n\t\tl.presenters.Range(func(key, value any) bool {\n\t\t\tp := value.(*presenter)\n\t\t\tp.setPrefix(l.width)\n\t\t\treturn true\n\t\t})\n\t}\n\treturn p\n}\n\nfunc (l *logConsumer) getPresenter(container string) *presenter {\n\tp, ok := l.presenters.Load(container)\n\tif !ok { // should have been registered, but ¯\\_(ツ)_/¯\n\t\treturn l.register(container)\n\t}\n\treturn p.(*presenter)\n}\n\n// Log formats a log message as received from name/container and writes it to stdout\nfunc (l *logConsumer) Log(container, message string) {\n\tl.write(l.stdout, container, message)\n}\n\n// Err formats a log message as received from name/container and writes it to stderr\nfunc (l *logConsumer) Err(container, message string) {\n\tl.write(l.stderr, container, message)\n}\n\nfunc (l *logConsumer) write(w io.Writer, container, message string) {\n\tif l.ctx.Err() != nil {\n\t\treturn\n\t}\n\tp := l.getPresenter(container)\n\ttimestamp := time.Now().Format(jsonmessage.RFC3339NanoFixed)\n\tfor line := range strings.SplitSeq(message, \"\\n\") {\n\t\tif l.timestamp {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s%s %s\\n\", p.prefix, timestamp, line)\n\t\t} else {\n\t\t\t_, _ = fmt.Fprintf(w, \"%s%s\\n\", p.prefix, line)\n\t\t}\n\t}\n}\n\nfunc (l *logConsumer) Status(container, msg string) {\n\tp := l.getPresenter(container)\n\ts := p.colors(fmt.Sprintf(\"%s%s %s\\n\", goterm.RESET_LINE, container, msg))\n\tl.stdout.Write([]byte(s)) //nolint:errcheck\n}\n\nfunc (l *logConsumer) computeWidth() {\n\twidth := 0\n\tl.presenters.Range(func(key, value any) bool {\n\t\tp := value.(*presenter)\n\t\tif len(p.name) > width {\n\t\t\twidth = len(p.name)\n\t\t}\n\t\treturn true\n\t})\n\tl.width = width + 1\n}\n\ntype presenter struct {\n\tcolors colorFunc\n\tname   string\n\tprefix string\n}\n\nfunc (p 
*presenter) setPrefix(width int) {\n\tif p.name == api.WatchLogger {\n\t\tp.prefix = p.colors(strings.Repeat(\" \", width) + \" ⦿ \")\n\t\treturn\n\t}\n\tp.prefix = p.colors(fmt.Sprintf(\"%-\"+strconv.Itoa(width)+\"s | \", p.name))\n}\n\ntype logDecorator struct {\n\tdecorated api.LogConsumer\n\tBefore    func()\n\tAfter     func()\n}\n\nfunc (l logDecorator) Log(containerName, message string) {\n\tl.Before()\n\tl.decorated.Log(containerName, message)\n\tl.After()\n}\n\nfunc (l logDecorator) Err(containerName, message string) {\n\tl.Before()\n\tl.decorated.Err(containerName, message)\n\tl.After()\n}\n\nfunc (l logDecorator) Status(container, msg string) {\n\tl.Before()\n\tl.decorated.Status(container, msg)\n\tl.After()\n}\n"
  },
  {
    "path": "cmd/formatter/pretty.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\t\"text/tabwriter\"\n)\n\n// PrintPrettySection prints a tabbed section on the writer parameter\nfunc PrintPrettySection(out io.Writer, printer func(writer io.Writer), headers ...string) error {\n\tw := tabwriter.NewWriter(out, 20, 1, 3, ' ', 0)\n\t_, _ = fmt.Fprintln(w, strings.Join(headers, \"\\t\"))\n\tprinter(w)\n\treturn w.Flush()\n}\n"
  },
  {
    "path": "cmd/formatter/shortcut.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"os\"\n\t\"strings\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/buger/goterm\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/eiannone/keyboard\"\n\t\"github.com/skratchdot/open-golang/open\"\n\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nconst DISPLAY_ERROR_TIME = 10\n\ntype KeyboardError struct {\n\terr       error\n\ttimeStart time.Time\n}\n\nfunc (ke *KeyboardError) shouldDisplay() bool {\n\treturn ke.err != nil && int(time.Since(ke.timeStart).Seconds()) < DISPLAY_ERROR_TIME\n}\n\nfunc (ke *KeyboardError) printError(height int, info string) {\n\tif ke.shouldDisplay() {\n\t\terrMessage := ke.err.Error()\n\n\t\tmoveCursor(height-1-extraLines(info)-extraLines(errMessage), 0)\n\t\tclearLine()\n\n\t\tfmt.Print(errMessage)\n\t}\n}\n\nfunc (ke *KeyboardError) addError(prefix string, err error) {\n\tke.timeStart = time.Now()\n\n\tprefix = ansiColor(CYAN, fmt.Sprintf(\"%s →\", prefix), BOLD)\n\terrorString := fmt.Sprintf(\"%s  %s\", prefix, err.Error())\n\n\tke.err = errors.New(errorString)\n}\n\nfunc (ke *KeyboardError) error() string {\n\treturn ke.err.Error()\n}\n\ntype KeyboardWatch struct {\n\tWatching bool\n\tWatcher  Feature\n}\n\n// Feature is a compose feature that can 
be started/stopped by a menu command\ntype Feature interface {\n\tStart(context.Context) error\n\tStop() error\n}\n\ntype KEYBOARD_LOG_LEVEL int\n\nconst (\n\tNONE  KEYBOARD_LOG_LEVEL = 0\n\tINFO  KEYBOARD_LOG_LEVEL = 1\n\tDEBUG KEYBOARD_LOG_LEVEL = 2\n)\n\ntype LogKeyboard struct {\n\tkError                KeyboardError\n\tWatch                 *KeyboardWatch\n\tDetach                func()\n\tIsDockerDesktopActive bool\n\tlogLevel              KEYBOARD_LOG_LEVEL\n\tsignalChannel         chan<- os.Signal\n}\n\nfunc NewKeyboardManager(isDockerDesktopActive bool, sc chan<- os.Signal) *LogKeyboard {\n\treturn &LogKeyboard{\n\t\tIsDockerDesktopActive: isDockerDesktopActive,\n\t\tlogLevel:              INFO,\n\t\tsignalChannel:         sc,\n\t}\n}\n\nfunc (lk *LogKeyboard) Decorate(l api.LogConsumer) api.LogConsumer {\n\treturn logDecorator{\n\t\tdecorated: l,\n\t\tBefore:    lk.clearNavigationMenu,\n\t\tAfter:     lk.PrintKeyboardInfo,\n\t}\n}\n\nfunc (lk *LogKeyboard) PrintKeyboardInfo() {\n\tif lk.logLevel == INFO {\n\t\tlk.printNavigationMenu()\n\t}\n}\n\n// createBuffer creates space to print the error and menu strings\nfunc (lk *LogKeyboard) createBuffer(lines int) {\n\tif lk.kError.shouldDisplay() {\n\t\textraLines := extraLines(lk.kError.error()) + 1\n\t\tlines += extraLines\n\t}\n\n\t// get the string\n\tinfoMessage := lk.navigationMenu()\n\t// calculate how many lines we need to display the menu info;\n\t// a line break might be needed\n\textraLines := extraLines(infoMessage) + 1\n\tlines += extraLines\n\n\tif lines > 0 {\n\t\tallocateSpace(lines)\n\t\tmoveCursorUp(lines)\n\t}\n}\n\nfunc (lk *LogKeyboard) printNavigationMenu() {\n\toffset := 1\n\tlk.clearNavigationMenu()\n\tlk.createBuffer(offset)\n\n\tif lk.logLevel == INFO {\n\t\theight := goterm.Height()\n\t\tmenu := lk.navigationMenu()\n\n\t\tcarriageReturn()\n\t\tsaveCursor()\n\n\t\tlk.kError.printError(height, menu)\n\n\t\tmoveCursor(height-extraLines(menu), 
0)\n\t\tclearLine()\n\t\tfmt.Print(menu)\n\n\t\tcarriageReturn()\n\t\trestoreCursor()\n\t}\n}\n\nfunc (lk *LogKeyboard) navigationMenu() string {\n\tvar items []string\n\tif lk.IsDockerDesktopActive {\n\t\titems = append(items, shortcutKeyColor(\"v\")+navColor(\" View in Docker Desktop\"))\n\t}\n\n\tif lk.IsDockerDesktopActive {\n\t\titems = append(items, shortcutKeyColor(\"o\")+navColor(\" View Config\"))\n\t}\n\n\tisEnabled := \" Enable\"\n\tif lk.Watch != nil && lk.Watch.Watching {\n\t\tisEnabled = \" Disable\"\n\t}\n\titems = append(items, shortcutKeyColor(\"w\")+navColor(isEnabled+\" Watch\"))\n\titems = append(items, shortcutKeyColor(\"d\")+navColor(\" Detach\"))\n\n\treturn strings.Join(items, \"   \")\n}\n\nfunc (lk *LogKeyboard) clearNavigationMenu() {\n\theight := goterm.Height()\n\tcarriageReturn()\n\tsaveCursor()\n\n\t// clearLine()\n\tfor range height {\n\t\tmoveCursorDown(1)\n\t\tclearLine()\n\t}\n\trestoreCursor()\n}\n\nfunc (lk *LogKeyboard) openDockerDesktop(ctx context.Context, project *types.Project) {\n\tif !lk.IsDockerDesktopActive {\n\t\treturn\n\t}\n\tgo func() {\n\t\t_ = tracing.EventWrapFuncForErrGroup(ctx, \"menu/gui\", tracing.SpanOptions{},\n\t\t\tfunc(ctx context.Context) error {\n\t\t\t\tlink := fmt.Sprintf(\"docker-desktop://dashboard/apps/%s\", project.Name)\n\t\t\t\terr := open.Run(link)\n\t\t\t\tif err != nil {\n\t\t\t\t\terr = fmt.Errorf(\"could not open Docker Desktop\")\n\t\t\t\t\tlk.keyboardError(\"View\", err)\n\t\t\t\t}\n\t\t\t\treturn err\n\t\t\t})()\n\t}()\n}\n\nfunc (lk *LogKeyboard) openDDComposeUI(ctx context.Context, project *types.Project) {\n\tif !lk.IsDockerDesktopActive {\n\t\treturn\n\t}\n\tgo func() {\n\t\t_ = tracing.EventWrapFuncForErrGroup(ctx, \"menu/gui/composeview\", tracing.SpanOptions{},\n\t\t\tfunc(ctx context.Context) error {\n\t\t\t\tlink := fmt.Sprintf(\"docker-desktop://dashboard/docker-compose/%s\", project.Name)\n\t\t\t\terr := open.Run(link)\n\t\t\t\tif err != nil {\n\t\t\t\t\terr = 
fmt.Errorf(\"could not open Docker Desktop Compose UI\")\n\t\t\t\t\tlk.keyboardError(\"View Config\", err)\n\t\t\t\t}\n\t\t\t\treturn err\n\t\t\t})()\n\t}()\n}\n\nfunc (lk *LogKeyboard) openDDWatchDocs(ctx context.Context, project *types.Project) {\n\tgo func() {\n\t\t_ = tracing.EventWrapFuncForErrGroup(ctx, \"menu/gui/watch\", tracing.SpanOptions{},\n\t\t\tfunc(ctx context.Context) error {\n\t\t\t\tlink := fmt.Sprintf(\"docker-desktop://dashboard/docker-compose/%s/watch\", project.Name)\n\t\t\t\terr := open.Run(link)\n\t\t\t\tif err != nil {\n\t\t\t\t\terr = fmt.Errorf(\"could not open Docker Desktop Compose UI\")\n\t\t\t\t\tlk.keyboardError(\"Watch Docs\", err)\n\t\t\t\t}\n\t\t\t\treturn err\n\t\t\t})()\n\t}()\n}\n\nfunc (lk *LogKeyboard) keyboardError(prefix string, err error) {\n\tlk.kError.addError(prefix, err)\n\n\tlk.printNavigationMenu()\n\ttimer1 := time.NewTimer((DISPLAY_ERROR_TIME + 1) * time.Second)\n\tgo func() {\n\t\t<-timer1.C\n\t\tlk.printNavigationMenu()\n\t}()\n}\n\nfunc (lk *LogKeyboard) ToggleWatch(ctx context.Context, options api.UpOptions) {\n\tif lk.Watch == nil {\n\t\treturn\n\t}\n\tif lk.Watch.Watching {\n\t\terr := lk.Watch.Watcher.Stop()\n\t\tif err != nil {\n\t\t\toptions.Start.Attach.Err(api.WatchLogger, err.Error())\n\t\t} else {\n\t\t\tlk.Watch.Watching = false\n\t\t}\n\t} else {\n\t\tgo func() {\n\t\t\t_ = tracing.EventWrapFuncForErrGroup(ctx, \"menu/watch\", tracing.SpanOptions{},\n\t\t\t\tfunc(ctx context.Context) error {\n\t\t\t\t\terr := lk.Watch.Watcher.Start(ctx)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\toptions.Start.Attach.Err(api.WatchLogger, err.Error())\n\t\t\t\t\t} else {\n\t\t\t\t\t\tlk.Watch.Watching = true\n\t\t\t\t\t}\n\t\t\t\t\treturn err\n\t\t\t\t})()\n\t\t}()\n\t}\n}\n\nfunc (lk *LogKeyboard) HandleKeyEvents(ctx context.Context, event keyboard.KeyEvent, project *types.Project, options api.UpOptions) {\n\tswitch kRune := event.Rune; kRune {\n\tcase 'd':\n\t\tlk.clearNavigationMenu()\n\t\tlk.Detach()\n\tcase 
'v':\n\t\tlk.openDockerDesktop(ctx, project)\n\tcase 'w':\n\t\tif lk.Watch == nil {\n\t\t\t// we try to open watch docs if DD is installed\n\t\t\tif lk.IsDockerDesktopActive {\n\t\t\t\tlk.openDDWatchDocs(ctx, project)\n\t\t\t}\n\t\t\t// either way we mark menu/watch as an error\n\t\t\tgo func() {\n\t\t\t\t_ = tracing.EventWrapFuncForErrGroup(ctx, \"menu/watch\", tracing.SpanOptions{},\n\t\t\t\t\tfunc(ctx context.Context) error {\n\t\t\t\t\t\terr := fmt.Errorf(\"watch is not yet configured. Learn more: %s\", ansiColor(CYAN, \"https://docs.docker.com/compose/file-watch/\"))\n\t\t\t\t\t\tlk.keyboardError(\"Watch\", err)\n\t\t\t\t\t\treturn err\n\t\t\t\t\t})()\n\t\t\t}()\n\t\t}\n\t\tlk.ToggleWatch(ctx, options)\n\tcase 'o':\n\t\tlk.openDDComposeUI(ctx, project)\n\t}\n\tswitch key := event.Key; key {\n\tcase keyboard.KeyCtrlC:\n\t\t_ = keyboard.Close()\n\t\tlk.clearNavigationMenu()\n\t\tshowCursor()\n\n\t\tlk.logLevel = NONE\n\t\t// will notify main thread to kill and will handle gracefully\n\t\tlk.signalChannel <- syscall.SIGINT\n\tcase keyboard.KeyCtrlZ:\n\t\thandleCtrlZ()\n\tcase keyboard.KeyEnter:\n\t\tnewLine()\n\t\tlk.printNavigationMenu()\n\t}\n}\n\nfunc (lk *LogKeyboard) EnableWatch(enabled bool, watcher Feature) {\n\tlk.Watch = &KeyboardWatch{\n\t\tWatching: enabled,\n\t\tWatcher:  watcher,\n\t}\n}\n\nfunc (lk *LogKeyboard) EnableDetach(detach func()) {\n\tlk.Detach = detach\n}\n\nfunc allocateSpace(lines int) {\n\tfor range lines {\n\t\tclearLine()\n\t\tnewLine()\n\t\tcarriageReturn()\n\t}\n}\n\nfunc extraLines(s string) int {\n\treturn int(math.Floor(float64(lenAnsi(s)) / float64(goterm.Width())))\n}\n\nfunc shortcutKeyColor(key string) string {\n\tforeground := \"38;2\"\n\tblack := \"0;0;0\"\n\tbackground := \"48;2\"\n\twhite := \"255;255;255\"\n\treturn ansiColor(foreground+\";\"+black+\";\"+background+\";\"+white, key, BOLD)\n}\n\nfunc navColor(key string) string {\n\treturn ansiColor(FAINT, key)\n}\n"
  },
  {
    "path": "cmd/formatter/shortcut_unix.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\nimport \"syscall\"\n\nfunc handleCtrlZ() {\n\t_ = syscall.Kill(0, syscall.SIGSTOP)\n}\n"
  },
  {
    "path": "cmd/formatter/shortcut_windows.go",
    "content": "//go:build windows\n\n/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage formatter\n\n// handleCtrlZ is a no-op on Windows as SIGSTOP is not supported\nfunc handleCtrlZ() {\n\t// Windows doesn't support SIGSTOP/SIGCONT signals\n\t// Ctrl+Z behavior is handled differently by the Windows terminal\n}\n"
  },
  {
    "path": "cmd/main.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage main\n\nimport (\n\t\"os\"\n\n\tdockercli \"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli-plugins/metadata\"\n\t\"github.com/docker/cli/cli-plugins/plugin\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/cmdtrace\"\n\t\"github.com/docker/compose/v5/cmd/compatibility\"\n\tcommands \"github.com/docker/compose/v5/cmd/compose\"\n\t\"github.com/docker/compose/v5/cmd/prompt\"\n\t\"github.com/docker/compose/v5/internal\"\n\t\"github.com/docker/compose/v5/pkg/compose\"\n)\n\nfunc pluginMain() {\n\tplugin.Run(\n\t\tfunc(cli command.Cli) *cobra.Command {\n\t\t\tbackendOptions := &commands.BackendOptions{\n\t\t\t\tOptions: []compose.Option{\n\t\t\t\t\tcompose.WithPrompt(prompt.NewPrompt(cli.In(), cli.Out()).Confirm),\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tcmd := commands.RootCommand(cli, backendOptions)\n\t\t\toriginalPreRunE := cmd.PersistentPreRunE\n\t\t\tcmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {\n\t\t\t\t// initialize the cli instance\n\t\t\t\tif err := plugin.PersistentPreRunE(cmd, args); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tif err := cmdtrace.Setup(cmd, cli, os.Args[1:]); err != nil {\n\t\t\t\t\tlogrus.Debugf(\"failed to enable tracing: %v\", err)\n\t\t\t\t}\n\n\t\t\t\tif originalPreRunE 
!= nil {\n\t\t\t\t\treturn originalPreRunE(cmd, args)\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\t}\n\n\t\t\tcmd.SetFlagErrorFunc(func(c *cobra.Command, err error) error {\n\t\t\t\treturn dockercli.StatusError{\n\t\t\t\t\tStatusCode: 1,\n\t\t\t\t\tStatus:     err.Error(),\n\t\t\t\t}\n\t\t\t})\n\t\t\treturn cmd\n\t\t},\n\t\tmetadata.Metadata{\n\t\t\tSchemaVersion: \"0.1.0\",\n\t\t\tVendor:        \"Docker Inc.\",\n\t\t\tVersion:       internal.Version,\n\t\t},\n\t\tcommand.WithUserAgent(\"compose/\"+internal.Version),\n\t)\n}\n\nfunc main() {\n\tif plugin.RunningStandalone() {\n\t\tos.Args = append([]string{\"docker\"}, compatibility.Convert(os.Args[1:])...)\n\t}\n\tpluginMain()\n}\n"
  },
  {
    "path": "cmd/prompt/prompt.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage prompt\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/AlecAivazis/survey/v2\"\n\t\"github.com/docker/cli/cli/streams\"\n\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\n//go:generate mockgen -destination=./prompt_mock.go -self_package \"github.com/docker/compose/v5/pkg/prompt\" -package=prompt . UI\n\n// UI - prompt user input\ntype UI interface {\n\tConfirm(message string, defaultValue bool) (bool, error)\n}\n\nfunc NewPrompt(stdin *streams.In, stdout *streams.Out) UI {\n\tif stdin.IsTerminal() {\n\t\treturn User{stdin: streamsFileReader{stdin}, stdout: streamsFileWriter{stdout}}\n\t}\n\treturn Pipe{stdin: stdin, stdout: stdout}\n}\n\n// User - in a terminal\ntype User struct {\n\tstdout streamsFileWriter\n\tstdin  streamsFileReader\n}\n\n// adapt streams.Out to terminal.FileWriter\ntype streamsFileWriter struct {\n\tstream *streams.Out\n}\n\nfunc (s streamsFileWriter) Write(p []byte) (n int, err error) {\n\treturn s.stream.Write(p)\n}\n\nfunc (s streamsFileWriter) Fd() uintptr {\n\treturn s.stream.FD()\n}\n\n// adapt streams.In to terminal.FileReader\ntype streamsFileReader struct {\n\tstream *streams.In\n}\n\nfunc (s streamsFileReader) Read(p []byte) (n int, err error) {\n\treturn s.stream.Read(p)\n}\n\nfunc (s streamsFileReader) Fd() uintptr {\n\treturn s.stream.FD()\n}\n\n// Confirm asks for yes or no input\nfunc (u 
User) Confirm(message string, defaultValue bool) (bool, error) {\n\tqs := &survey.Confirm{\n\t\tMessage: message,\n\t\tDefault: defaultValue,\n\t}\n\tvar b bool\n\terr := survey.AskOne(qs, &b, func(options *survey.AskOptions) error {\n\t\toptions.Stdio.In = u.stdin\n\t\toptions.Stdio.Out = u.stdout\n\t\treturn nil\n\t})\n\treturn b, err\n}\n\n// Pipe prompts for user input when stdin is not a terminal\ntype Pipe struct {\n\tstdout io.Writer\n\tstdin  io.Reader\n}\n\n// Confirm asks for yes or no input\nfunc (u Pipe) Confirm(message string, defaultValue bool) (bool, error) {\n\t_, _ = fmt.Fprint(u.stdout, message)\n\tvar answer string\n\t_, _ = fmt.Fscanln(u.stdin, &answer)\n\treturn utils.StringToBool(answer), nil\n}\n"
  },
  {
    "path": "cmd/prompt/prompt_mock.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: github.com/docker/compose-cli/pkg/prompt (interfaces: UI)\n\n// Package prompt is a generated GoMock package.\npackage prompt\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"go.uber.org/mock/gomock\"\n)\n\n// MockUI is a mock of UI interface\ntype MockUI struct {\n\tctrl     *gomock.Controller\n\trecorder *MockUIMockRecorder\n}\n\n// MockUIMockRecorder is the mock recorder for MockUI\ntype MockUIMockRecorder struct {\n\tmock *MockUI\n}\n\n// NewMockUI creates a new mock instance\nfunc NewMockUI(ctrl *gomock.Controller) *MockUI {\n\tmock := &MockUI{ctrl: ctrl}\n\tmock.recorder = &MockUIMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockUI) EXPECT() *MockUIMockRecorder {\n\treturn m.recorder\n}\n\n// Confirm mocks base method\nfunc (m *MockUI) Confirm(arg0 string, arg1 bool) (bool, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Confirm\", arg0, arg1)\n\tret0, _ := ret[0].(bool)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Confirm indicates an expected call of Confirm\nfunc (mr *MockUIMockRecorder) Confirm(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Confirm\", reflect.TypeOf((*MockUI)(nil).Confirm), arg0, arg1)\n}\n\n// Input mocks base method\nfunc (m *MockUI) Input(arg0, arg1 string) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Input\", arg0, arg1)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Input indicates an expected call of Input\nfunc (mr *MockUIMockRecorder) Input(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Input\", reflect.TypeOf((*MockUI)(nil).Input), arg0, arg1)\n}\n\n// Password mocks base method\nfunc (m *MockUI) Password(arg0 string) (string, error) 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Password\", arg0)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Password indicates an expected call of Password\nfunc (mr *MockUIMockRecorder) Password(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Password\", reflect.TypeOf((*MockUI)(nil).Password), arg0)\n}\n\n// Select mocks base method\nfunc (m *MockUI) Select(arg0 string, arg1 []string) (int, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Select\", arg0, arg1)\n\tret0, _ := ret[0].(int)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Select indicates an expected call of Select\nfunc (mr *MockUIMockRecorder) Select(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Select\", reflect.TypeOf((*MockUI)(nil).Select), arg0, arg1)\n}\n"
  },
  {
    "path": "codecov.yml",
    "content": "coverage:\n  status:\n    project:\n      default:\n        informational: true\n        target: auto\n        threshold: 2%\n    patch:\n      default:\n        informational: true\n\ncomment:\n  require_changes: true\n\nignore:\n  - \"packaging\"\n  - \"docs\"\n  - \"bin\"\n  - \"e2e\"\n  - \"pkg/e2e\"\n  - \"**/*_test.go\"\n"
  },
  {
    "path": "docker-bake.hcl",
    "content": "// Copyright 2022 Docker Compose CLI authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//    http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\nvariable \"GO_VERSION\" {\n  # default ARG value set in Dockerfile\n  default = null\n}\n\nvariable \"BUILD_TAGS\" {\n  default = \"e2e\"\n}\n\nvariable \"DOCS_FORMATS\" {\n  default = \"md,yaml\"\n}\n\n# Defines the output folder to override the default behavior.\n# See Makefile for details, this is generally only useful for\n# the packaging scripts and care should be taken to not break\n# them.\nvariable \"DESTDIR\" {\n  default = \"\"\n}\nfunction \"outdir\" {\n  params = [defaultdir]\n  result = DESTDIR != \"\" ? 
DESTDIR : \"${defaultdir}\"\n}\n\n# Special target: https://github.com/docker/metadata-action#bake-definition\ntarget \"meta-helper\" {}\n\ntarget \"_common\" {\n  args = {\n    GO_VERSION = GO_VERSION\n    BUILD_TAGS = BUILD_TAGS\n    BUILDKIT_CONTEXT_KEEP_GIT_DIR = 1\n  }\n}\n\ngroup \"default\" {\n  targets = [\"binary\"]\n}\n\ngroup \"validate\" {\n  targets = [\"lint\", \"vendor-validate\", \"license-validate\"]\n}\n\ntarget \"lint\" {\n  inherits = [\"_common\"]\n  target = \"lint\"\n  output = [\"type=cacheonly\"]\n}\n\ntarget \"license-validate\" {\n  target = \"license-validate\"\n  output = [\"type=cacheonly\"]\n}\n\ntarget \"license-update\" {\n  target = \"license-update\"\n  output = [\".\"]\n}\n\ntarget \"vendor-validate\" {\n  inherits = [\"_common\"]\n  target = \"vendor-validate\"\n  output = [\"type=cacheonly\"]\n}\n\ntarget \"vendor-update\" {\n  inherits = [\"_common\"]\n  target = \"vendor-update\"\n  output = [\".\"]\n}\n\ntarget \"test\" {\n  inherits = [\"_common\"]\n  target = \"test-coverage\"\n  output = [outdir(\"./bin/coverage/unit\")]\n}\n\ntarget \"binary-with-coverage\" {\n  inherits = [\"_common\"]\n  target = \"binary\"\n  args = {\n    BUILD_FLAGS = \"-cover -covermode=atomic\"\n  }\n  output = [outdir(\"./bin/build\")]\n  platforms = [\"local\"]\n}\n\ntarget \"binary\" {\n  inherits = [\"_common\"]\n  target = \"binary\"\n  output = [outdir(\"./bin/build\")]\n  platforms = [\"local\"]\n}\n\ntarget \"binary-cross\" {\n  inherits = [\"binary\"]\n  platforms = [\n    \"darwin/amd64\",\n    \"darwin/arm64\",\n    \"linux/amd64\",\n    \"linux/arm/v6\",\n    \"linux/arm/v7\",\n    \"linux/arm64\",\n    \"linux/ppc64le\",\n    \"linux/riscv64\",\n    \"linux/s390x\",\n    \"windows/amd64\",\n    \"windows/arm64\"\n  ]\n}\n\ntarget \"release\" {\n  inherits = [\"binary-cross\"]\n  target = \"release\"\n  output = [outdir(\"./bin/release\")]\n}\n\ntarget \"docs-validate\" {\n  inherits = [\"_common\"]\n  target = \"docs-validate\"\n  
output = [\"type=cacheonly\"]\n}\n\ntarget \"docs-update\" {\n  inherits = [\"_common\"]\n  target = \"docs-update\"\n  output = [\"./docs\"]\n}\n\ntarget \"image-cross\" {\n  inherits = [\"meta-helper\", \"binary-cross\"]\n  output = [\"type=image\"]\n}\n"
  },
  {
    "path": "docs/examples/provider.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/spf13/cobra\"\n\t\"github.com/spf13/pflag\"\n)\n\nfunc main() {\n\tcmd := &cobra.Command{\n\t\tShort: \"Compose Provider Example\",\n\t\tUse:   \"demo\",\n\t}\n\tcmd.AddCommand(composeCommand())\n\terr := cmd.Execute()\n\tif err != nil {\n\t\tfmt.Fprintln(os.Stderr, err)\n\t\tos.Exit(1)\n\t}\n}\n\ntype options struct {\n\tdb   string\n\tsize int\n}\n\nfunc composeCommand() *cobra.Command {\n\tc := &cobra.Command{\n\t\tUse:              \"compose EVENT\",\n\t\tTraverseChildren: true,\n\t}\n\tc.PersistentFlags().String(\"project-name\", \"\", \"compose project name\") // unused\n\n\tvar options options\n\tupCmd := &cobra.Command{\n\t\tUse: \"up\",\n\t\tRun: func(_ *cobra.Command, args []string) {\n\t\t\tup(options, args)\n\t\t},\n\t\tArgs: cobra.ExactArgs(1),\n\t}\n\n\tupCmd.Flags().StringVar(&options.db, \"type\", \"\", \"Database type (mysql, postgres, etc.)\")\n\t_ = upCmd.MarkFlagRequired(\"type\")\n\tupCmd.Flags().IntVar(&options.size, \"size\", 10, \"Database size in GB\")\n\tupCmd.Flags().String(\"name\", \"\", \"Name of the database to be created\")\n\t_ = upCmd.MarkFlagRequired(\"name\")\n\n\tdownCmd := &cobra.Command{\n\t\tUse:  \"down\",\n\t\tRun:  down,\n\t\tArgs: cobra.ExactArgs(1),\n\t}\n\tdownCmd.Flags().String(\"name\", \"\", 
\"Name of the database to be deleted\")\n\t_ = downCmd.MarkFlagRequired(\"name\")\n\n\tc.AddCommand(upCmd, downCmd)\n\tc.AddCommand(metadataCommand(upCmd, downCmd))\n\treturn c\n}\n\nconst lineSeparator = \"\\n\"\n\nfunc up(options options, args []string) {\n\tservicename := args[0]\n\tfmt.Printf(`{ \"type\": \"debug\", \"message\": \"Starting %s\" }%s`, servicename, lineSeparator)\n\n\tfor i := 0; i < options.size; i += 10 {\n\t\ttime.Sleep(1 * time.Second)\n\t\tfmt.Printf(`{ \"type\": \"info\", \"message\": \"Processing ... %d%%\" }%s`, i*100/options.size, lineSeparator)\n\t}\n\tfmt.Printf(`{ \"type\": \"setenv\", \"message\": \"URL=https://magic.cloud/%s\" }%s`, servicename, lineSeparator)\n}\n\nfunc down(_ *cobra.Command, _ []string) {\n\tfmt.Printf(`{ \"type\": \"error\", \"message\": \"Permission error\" }%s`, lineSeparator)\n}\n\nfunc metadataCommand(upCmd, downCmd *cobra.Command) *cobra.Command {\n\treturn &cobra.Command{\n\t\tUse: \"metadata\",\n\t\tRun: func(cmd *cobra.Command, _ []string) {\n\t\t\tmetadata(upCmd, downCmd)\n\t\t},\n\t\tArgs: cobra.NoArgs,\n\t}\n}\n\nfunc metadata(upCmd, downCmd *cobra.Command) {\n\tmetadata := ProviderMetadata{}\n\tmetadata.Description = \"Manage services on AwesomeCloud\"\n\tmetadata.Up = commandParameters(upCmd)\n\tmetadata.Down = commandParameters(downCmd)\n\tjsonMetadata, err := json.Marshal(metadata)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(string(jsonMetadata))\n}\n\nfunc commandParameters(cmd *cobra.Command) CommandMetadata {\n\tcmdMetadata := CommandMetadata{}\n\tcmd.Flags().VisitAll(func(f *pflag.Flag) {\n\t\t_, isRequired := f.Annotations[cobra.BashCompOneRequiredFlag]\n\t\tcmdMetadata.Parameters = append(cmdMetadata.Parameters, Metadata{\n\t\t\tName:        f.Name,\n\t\t\tDescription: f.Usage,\n\t\t\tRequired:    isRequired,\n\t\t\tType:        f.Value.Type(),\n\t\t\tDefault:     f.DefValue,\n\t\t})\n\t})\n\treturn cmdMetadata\n}\n\ntype ProviderMetadata struct {\n\tDescription string          
`json:\"description\"`\n\tUp          CommandMetadata `json:\"up\"`\n\tDown        CommandMetadata `json:\"down\"`\n}\n\ntype CommandMetadata struct {\n\tParameters []Metadata `json:\"parameters\"`\n}\n\ntype Metadata struct {\n\tName        string `json:\"name\"`\n\tDescription string `json:\"description\"`\n\tRequired    bool   `json:\"required\"`\n\tType        string `json:\"type\"`\n\tDefault     string `json:\"default,omitempty\"`\n}\n"
  },
  {
    "path": "docs/extension.md",
    "content": "# About\n\nThe Compose application model defines `service` as an abstraction for a computing unit managing (a subset of)\napplication needs, which can interact with other services over network(s). Docker Compose is designed \nto use the Docker Engine (\"Moby\") API to manage services as containers, but the abstraction _could_ also cover \nmany other runtimes, typically cloud services or services natively provided by the host.\n\nThe Compose extensibility model extends `service` support to runtimes accessible through\nthird-party tooling.\n\n# Architecture\n\nCompose extensibility relies on the `provider` attribute to select the actual binary responsible for managing\nthe resource(s) needed to run a service.\n\n```yaml\n  database:\n    provider:\n      type: awesomecloud\n      options:\n        type: mysql\n        size: 256\n        name: myAwesomeCloudDB\n```\n\n`provider.type` tells Compose which binary to run, which can be either:\n- Another Docker CLI plugin (typically, `model` to run `docker-model`)\n- An executable in the user's `PATH`\n\nIf `provider.type` doesn't resolve to either of those, Compose reports an error and interrupts the `up` command.\n\nTo be a valid Compose extension, the provider command *MUST* accept a `compose` command (which can be hidden)\nwith subcommands `up` and `down`.\n\n## Up lifecycle\n\nTo execute an application's `up` lifecycle, Compose executes the provider's `compose up` command, passing \nthe project name, service name, and additional options. The `provider.options` are translated \ninto command line flags. 
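The option-to-flag translation can be sketched in Go as follows; `optionsToFlags` is a hypothetical helper for illustration, not Compose's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// optionsToFlags turns a provider.options map into command-line flags,
// one --<key>=<value> per entry (a sketch of the translation described
// above, not Compose's actual code).
func optionsToFlags(options map[string]string) []string {
	keys := make([]string, 0, len(options))
	for k := range options {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order for the example
	flags := make([]string, 0, len(keys))
	for _, k := range keys {
		flags = append(flags, fmt.Sprintf("--%s=%s", k, options[k]))
	}
	return flags
}

func main() {
	fmt.Println(optionsToFlags(map[string]string{
		"type": "mysql", "size": "256", "name": "myAwesomeCloudDB",
	}))
}
```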
For example:\n```console\nawesomecloud compose --project-name <NAME> up --type=mysql --size=256 \"database\"\n```\n\n> __Note:__ `project-name` _should_ be used by the provider to tag the resources it creates\n> for the project, so that a later run of the `down` subcommand can release\n> all resources allocated for that project.\n\n## Communication with Compose\n\nProviders can interact with Compose using `stdout` as a channel, sending newline-delimited JSON messages.\nJSON messages MUST include a `type` and a `message` attribute.\n```json\n{ \"type\": \"info\", \"message\": \"preparing mysql ...\" }\n```\n\n`type` can be one of:\n- `info`: Reports status updates to the user. Compose renders the message as the service state in the progress UI.\n- `error`: Lets the user know something went wrong, with details about the error. Compose renders the message as the reason for the service failure.\n- `setenv`: Lets the provider tell Compose how dependent services can access the created resource. See the next section for further details.\n- `debug`: These messages can help debug the provider, but are not rendered to the user by default. 
They are rendered when Compose is started with `--verbose` flag.\n\n```mermaid\nsequenceDiagram\n    Shell->>Compose: docker compose up\n    Compose->>Provider: compose up --project-name=xx --foo=bar \"database\"\n    Provider--)Compose: json { \"info\": \"pulling 25%\" }\n    Compose-)Shell: pulling 25%\n    Provider--)Compose: json { \"info\": \"pulling 50%\" }\n    Compose-)Shell: pulling 50%\n    Provider--)Compose: json { \"info\": \"pulling 75%\" }\n    Compose-)Shell: pulling 75%\n    Provider--)Compose: json { \"setenv\": \"URL=http://cloud.com/abcd:1234\" }\n    Compose-)Compose: set DATABASE_URL\n    Provider-)Compose: EOF (command complete) exit 0\n    Compose-)Shell: service started\n```\n\n## Connection to a service managed by a provider\n\nA service in the Compose application can declare dependency on a service managed by an external provider: \n\n```yaml\nservices:\n  app:\n    image: myapp \n    depends_on:\n       - database\n\n  database:\n    provider:\n      type: awesomecloud\n```\n\nWhen the provider command sends a `setenv` JSON message, Compose injects the specified variable into any dependent service,\nautomatically prefixing it with the service name. For example, if `awesomecloud compose up` returns:\n```json\n{\"type\": \"setenv\", \"message\": \"URL=https://awesomecloud.com/db:1234\"}\n```\nThen the `app` service, which depends on the service managed by the provider, will receive a `DATABASE_URL` environment variable injected\ninto its runtime environment.\n\n> __Note:__  The `compose up` provider command _MUST_ be idempotent. 
If the resource is already running, the command _MUST_ set\n> the same environment variables to ensure consistent configuration of dependent services.\n\n## Down lifecycle\n\nThe `down` lifecycle is the counterpart of `up`: Compose runs the `<provider> compose --project-name <NAME> down <SERVICE>` command.\nThe provider is responsible for releasing all resources associated with the service.\n\n## Provide metadata about options\n\nCompose extensions *MAY* implement a `metadata` subcommand to provide information about the parameters accepted by the `up` and `down` commands.\n\nThe `metadata` subcommand takes no parameters and returns a JSON structure on the `stdout` channel that describes the parameters accepted by both the `up` and `down` commands, including whether each parameter is mandatory or optional.\n\n```console\nawesomecloud compose metadata\n```\n\nThe expected JSON output format is:\n```json\n{\n  \"description\": \"Manage services on AwesomeCloud\",\n  \"up\": {\n    \"parameters\": [\n      {\n        \"name\": \"type\",\n        \"description\": \"Database type (mysql, postgres, etc.)\",\n        \"required\": true,\n        \"type\": \"string\"\n      },\n      {\n        \"name\": \"size\",\n        \"description\": \"Database size in GB\",\n        \"required\": false,\n        \"type\": \"integer\",\n        \"default\": \"10\"\n      },\n      {\n        \"name\": \"name\",\n        \"description\": \"Name of the database to be created\",\n        \"required\": true,\n        \"type\": \"string\"\n      }\n    ]\n  },\n  \"down\": {\n    \"parameters\": [\n      {\n        \"name\": \"name\",\n        \"description\": \"Name of the database to be removed\",\n        \"required\": true,\n        \"type\": \"string\"\n      }\n    ]\n  }\n}\n```\nThe top-level elements are:\n- `description`: Human-readable description of the provider\n- `up`: Object describing the parameters accepted by the `up` command\n- `down`: Object describing the parameters accepted by the 
`down` command\n\nAnd for each command parameter, you should include the following properties:\n- `name`: The parameter name (without `--` prefix)\n- `description`: Human-readable description of the parameter\n- `required`: Boolean indicating if the parameter is mandatory\n- `type`: Parameter type (`string`, `integer`, `boolean`, etc.)\n- `default`: Default value (optional, only for non-required parameters)\n- `enum`: List of possible values supported by the parameter separated by `,` (optional, only for parameters with a limited set of values)\n\nThis metadata allows Compose and other tools to understand the provider's interface and provide better user experience, such as validation, auto-completion, and documentation generation.\n\n## Examples\n\nSee [example](examples/provider.go) for illustration on implementing this API in a command line \n"
  },
  {
    "path": "docs/reference/compose.md",
    "content": "\n# docker compose\n\n```text\ndocker compose [-f <arg>...] [options] [COMMAND] [ARGS...]\n```\n\n<!---MARKER_GEN_START-->\nDefine and run multi-container applications with Docker\n\n### Subcommands\n\n| Name                            | Description                                                                             |\n|:--------------------------------|:----------------------------------------------------------------------------------------|\n| [`attach`](compose_attach.md)   | Attach local standard input, output, and error streams to a service's running container |\n| [`bridge`](compose_bridge.md)   | Convert compose files into another model                                                |\n| [`build`](compose_build.md)     | Build or rebuild services                                                               |\n| [`commit`](compose_commit.md)   | Create a new image from a service container's changes                                   |\n| [`config`](compose_config.md)   | Parse, resolve and render compose file in canonical format                              |\n| [`cp`](compose_cp.md)           | Copy files/folders between a service container and the local filesystem                 |\n| [`create`](compose_create.md)   | Creates containers for a service                                                        |\n| [`down`](compose_down.md)       | Stop and remove containers, networks                                                    |\n| [`events`](compose_events.md)   | Receive real time events from containers                                                |\n| [`exec`](compose_exec.md)       | Execute a command in a running container                                                |\n| [`export`](compose_export.md)   | Export a service container's filesystem as a tar archive                                |\n| [`images`](compose_images.md)   | List images used by the created containers                                              |\n| 
[`kill`](compose_kill.md)       | Force stop service containers                                                           |\n| [`logs`](compose_logs.md)       | View output from containers                                                             |\n| [`ls`](compose_ls.md)           | List running compose projects                                                           |\n| [`pause`](compose_pause.md)     | Pause services                                                                          |\n| [`port`](compose_port.md)       | Print the public port for a port binding                                                |\n| [`ps`](compose_ps.md)           | List containers                                                                         |\n| [`publish`](compose_publish.md) | Publish compose application                                                             |\n| [`pull`](compose_pull.md)       | Pull service images                                                                     |\n| [`push`](compose_push.md)       | Push service images                                                                     |\n| [`restart`](compose_restart.md) | Restart service containers                                                              |\n| [`rm`](compose_rm.md)           | Removes stopped service containers                                                      |\n| [`run`](compose_run.md)         | Run a one-off command on a service                                                      |\n| [`scale`](compose_scale.md)     | Scale services                                                                          |\n| [`start`](compose_start.md)     | Start services                                                                          |\n| [`stats`](compose_stats.md)     | Display a live stream of container(s) resource usage statistics                         |\n| [`stop`](compose_stop.md)       | Stop services                                                
                           |\n| [`top`](compose_top.md)         | Display the running processes                                                           |\n| [`unpause`](compose_unpause.md) | Unpause services                                                                        |\n| [`up`](compose_up.md)           | Create and start containers                                                             |\n| [`version`](compose_version.md) | Show the Docker Compose version information                                             |\n| [`volumes`](compose_volumes.md) | List volumes                                                                            |\n| [`wait`](compose_wait.md)       | Block until containers of all (or specified) services stop.                             |\n| [`watch`](compose_watch.md)     | Watch build context for service and rebuild/refresh containers when files are updated   |\n\n\n### Options\n\n| Name                   | Type          | Default | Description                                                                                         |\n|:-----------------------|:--------------|:--------|:----------------------------------------------------------------------------------------------------|\n| `--all-resources`      | `bool`        |         | Include all resources, even those not used by services                                              |\n| `--ansi`               | `string`      | `auto`  | Control when to print ANSI control characters (\"never\"\\|\"always\"\\|\"auto\")                           |\n| `--compatibility`      | `bool`        |         | Run compose in backward compatibility mode                                                          |\n| `--dry-run`            | `bool`        |         | Execute command in dry run mode                                                                     |\n| `--env-file`           | `stringArray` |         | Specify an alternate environment file                            
                                    |\n| `-f`, `--file`         | `stringArray` |         | Compose configuration files                                                                         |\n| `--parallel`           | `int`         | `-1`    | Control max parallelism, -1 for unlimited                                                           |\n| `--profile`            | `stringArray` |         | Specify a profile to enable                                                                         |\n| `--progress`           | `string`      |         | Set type of progress output (auto, tty, plain, json, quiet)                                         |\n| `--project-directory`  | `string`      |         | Specify an alternate working directory<br>(default: the path of the first specified Compose file)   |\n| `-p`, `--project-name` | `string`      |         | Project name                                                                                        |\n\n\n<!---MARKER_GEN_END-->\n\n## Examples\n\n### Use `-f` to specify the name and path of one or more Compose files\nUse the `-f` flag to specify the location of a Compose [configuration file](/reference/compose-file/).\n\n#### Specifying multiple Compose files\nYou can supply multiple `-f` configuration files. When you supply multiple files, Compose combines them into a single\nconfiguration. Compose builds the configuration in the order you supply the files. 
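The combination of several files can be pictured as a recursive map merge; the sketch below is a toy model for illustration (the hypothetical `merge` helper is not the actual compose-go loader):

```go
package main

import "fmt"

// merge overlays config b onto config a: matching keys in b override a,
// nested maps are merged recursively, and new keys are added
// (a toy model of multi-file Compose merging, not the real loader).
func merge(a, b map[string]any) map[string]any {
	out := map[string]any{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		if sub, ok := v.(map[string]any); ok {
			if existing, ok := out[k].(map[string]any); ok {
				out[k] = merge(existing, sub)
				continue
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	base := map[string]any{"webapp": map[string]any{"image": "examples/web"}}
	override := map[string]any{"webapp": map[string]any{"environment": []string{"DEBUG=1"}}}
	fmt.Println(merge(base, override))
}
```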
Subsequent files override and add\nto their predecessors.\n\nFor example, consider this command line:\n\n```console\n$ docker compose -f compose.yaml -f compose.admin.yaml run backup_db\n```\n\nThe `compose.yaml` file might specify a `webapp` service.\n\n```yaml\nservices:\n  webapp:\n    image: examples/web\n    ports:\n      - \"8000:8000\"\n    volumes:\n      - \"/data\"\n```\nIf the `compose.admin.yaml` file also specifies this same service, any matching fields override those in the previous file,\nand new values are added to the `webapp` service configuration.\n\n```yaml\nservices:\n  webapp:\n    build: .\n    environment:\n      - DEBUG=1\n```\n\nWhen you use multiple Compose files, all paths in the files are relative to the first configuration file specified\nwith `-f`. You can use the `--project-directory` option to override this base path.\n\nUse `-f` with `-` (dash) as the filename to read the configuration from stdin. When stdin is used, all paths in the\nconfiguration are relative to the current working directory.\n\nThe `-f` flag is optional. If you don’t provide this flag on the command line, Compose traverses the working directory\nand its parent directories looking for a `compose.yaml` or `docker-compose.yaml` file.\n\n#### Specifying a path to a single Compose file\nYou can use the `-f` flag to specify a path to a Compose file that is not located in the current directory, either\nfrom the command line or by setting up a `COMPOSE_FILE` environment variable in your shell or in an environment file.\n\nFor an example of using the `-f` option at the command line, suppose you are running the Compose Rails sample, and\nhave a `compose.yaml` file in a directory called `sandbox/rails`. 
You can use a command like `docker compose pull` to\nget the postgres image for the db service from anywhere by using the `-f` flag as follows:\n\n```console\n$ docker compose -f ~/sandbox/rails/compose.yaml pull db\n```\n\n#### Using an OCI published artifact\nYou can use the `-f` flag with the `oci://` prefix to reference a Compose file that has been published to an OCI registry.\nThis allows you to distribute and version your Compose configurations as OCI artifacts.\n\nTo use a Compose file from an OCI registry:\n\n```console\n$ docker compose -f oci://registry.example.com/my-compose-project:latest up\n```\n\nYou can also combine OCI artifacts with local files:\n\n```console\n$ docker compose -f oci://registry.example.com/my-compose-project:v1.0 -f compose.override.yaml up\n```\n\nThe OCI artifact must contain a valid Compose file. You can publish Compose files to an OCI registry using the\n`docker compose publish` command.\n\n#### Using a git repository\nYou can use the `-f` flag to reference a Compose file from a git repository. Compose supports various git URL formats:\n\nUsing HTTPS:\n```console\n$ docker compose -f https://github.com/user/repo.git up\n```\n\nUsing SSH:\n```console\n$ docker compose -f git@github.com:user/repo.git up\n```\n\nYou can specify a specific branch, tag, or commit:\n```console\n$ docker compose -f https://github.com/user/repo.git@main up\n$ docker compose -f https://github.com/user/repo.git@v1.0.0 up\n$ docker compose -f https://github.com/user/repo.git@abc123 up\n```\n\nYou can also specify a subdirectory within the repository:\n```console\n$ docker compose -f https://github.com/user/repo.git#main:path/to/compose.yaml up\n```\n\nWhen using git resources, Compose will clone the repository and use the specified Compose file. 
You can combine\ngit resources with local files:\n\n```console\n$ docker compose -f https://github.com/user/repo.git -f compose.override.yaml up\n```\n\n### Use `-p` to specify a project name\n\nEach configuration has a project name. Compose sets the project name using\nthe following mechanisms, in order of precedence:\n- The `-p` command line flag\n- The `COMPOSE_PROJECT_NAME` environment variable\n- The top level `name:` variable from the config file (or the last `name:`\nfrom a series of config files specified using `-f`)\n- The `basename` of the project directory containing the config file (or\ncontaining the first config file specified using `-f`)\n- The `basename` of the current directory if no config file is specified\n\nProject names must contain only lowercase letters, decimal digits, dashes,\nand underscores, and must begin with a lowercase letter or decimal digit. If\nthe `basename` of the project directory or current directory violates this\nconstraint, you must use one of the other mechanisms.\n\n```console\n$ docker compose -p my_project ps -a\nNAME                 SERVICE    STATUS     PORTS\nmy_project_demo_1    demo       running\n\n$ docker compose -p my_project logs\ndemo_1  | PING localhost (127.0.0.1): 56 data bytes\ndemo_1  | 64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.095 ms\n```\n\n### Use profiles to enable optional services\n\nUse `--profile` to specify one or more active profiles.\nCalling `docker compose --profile frontend up` starts the services with the profile `frontend` as well as services\nwithout any specified profile.\nYou can also enable multiple profiles, e.g. 
with `docker compose --profile frontend --profile debug up` the profiles `frontend` and `debug` are enabled.\n\nProfiles can also be set with the `COMPOSE_PROFILES` environment variable.\n\n### Configuring parallelism\n\nUse `--parallel` to specify the maximum level of parallelism for concurrent engine calls.\nCalling `docker compose --parallel 1 pull` pulls the pullable images defined in the Compose file\none at a time. This can also be used to control build concurrency.\n\nParallelism can also be set with the `COMPOSE_PARALLEL_LIMIT` environment variable.\n\n### Set up environment variables\n\nYou can set environment variables for various docker compose options, including the `-f`, `-p` and `--profile` flags.\n\nSetting the `COMPOSE_FILE` environment variable is equivalent to passing the `-f` flag,\nthe `COMPOSE_PROJECT_NAME` environment variable does the same as the `-p` flag,\nthe `COMPOSE_PROFILES` environment variable is equivalent to the `--profile` flag,\nand `COMPOSE_PARALLEL_LIMIT` does the same as the `--parallel` flag.\n\nIf flags are explicitly set on the command line, the associated environment variable is ignored.\n\nSetting the `COMPOSE_IGNORE_ORPHANS` environment variable to `true` stops docker compose from detecting orphaned\ncontainers for the project.\n\nSetting the `COMPOSE_MENU` environment variable to `false` disables the helper menu when running `docker compose up`\nin attached mode. 
Alternatively, you can also run `docker compose up --menu=false` to disable the helper menu.\n\n### Use Dry Run mode to test your command\n\nUse the `--dry-run` flag to test a command without changing your application stack state.\nDry Run mode shows you all the steps Compose applies when executing a command, for example:\n```console\n$ docker compose --dry-run up --build -d\n[+] Pulling 1/1\n ✔ DRY-RUN MODE -  db Pulled                                                              0.9s\n[+] Running 10/8\n ✔ DRY-RUN MODE -    build service backend                                                0.0s\n ✔ DRY-RUN MODE -  ==> ==> writing image dryRun-754a08ddf8bcb1cf22f310f09206dd783d42f7dd  0.0s\n ✔ DRY-RUN MODE -  ==> ==> naming to nginx-golang-mysql-backend                           0.0s\n ✔ DRY-RUN MODE -  Network nginx-golang-mysql_default        Created                      0.0s\n ✔ DRY-RUN MODE -  Container nginx-golang-mysql-db-1         Created                      0.0s\n ✔ DRY-RUN MODE -  Container nginx-golang-mysql-backend-1    Created                      0.0s\n ✔ DRY-RUN MODE -  Container nginx-golang-mysql-proxy-1      Created                      0.0s\n ✔ DRY-RUN MODE -  Container nginx-golang-mysql-db-1         Healthy                      0.5s\n ✔ DRY-RUN MODE -  Container nginx-golang-mysql-backend-1    Started                      0.0s\n ✔ DRY-RUN MODE -  Container nginx-golang-mysql-proxy-1      Started                      0.0s\n```\nFrom the example above, you can see that the first step is to pull the image defined by the `db` service, then build the `backend` service.\nNext, the containers are created. The `db` service is started, and the `backend` and `proxy` services wait until the `db` service is healthy before starting.\n\nDry Run mode works with almost all commands. You cannot use Dry Run mode with commands that don't change the state of a Compose stack, such as `ps`, `ls`, or `logs`.\n"
  },
  {
    "path": "docs/reference/compose_alpha.md",
    "content": "# docker compose alpha\n\n<!---MARKER_GEN_START-->\nExperimental commands\n\n### Subcommands\n\n| Name                              | Description                                                                                          |\n|:----------------------------------|:-----------------------------------------------------------------------------------------------------|\n| [`viz`](compose_alpha_viz.md)     | EXPERIMENTAL - Generate a graphviz graph from your compose file                                      |\n| [`watch`](compose_alpha_watch.md) | EXPERIMENTAL - Watch build context for service and rebuild/refresh containers when files are updated |\n\n\n### Options\n\n| Name        | Type | Default | Description                     |\n|:------------|:-----|:--------|:--------------------------------|\n| `--dry-run` |      |         | Execute command in dry run mode |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_alpha_dry-run.md",
    "content": "# docker compose alpha dry-run\n\n<!---MARKER_GEN_START-->\nDry run command allows you to test a command without applying changes\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_alpha_generate.md",
    "content": "# docker compose alpha generate\n\n<!---MARKER_GEN_START-->\nEXPERIMENTAL - Generate a Compose file from existing containers\n\n### Options\n\n| Name            | Type     | Default | Description                               |\n|:----------------|:---------|:--------|:------------------------------------------|\n| `--dry-run`     | `bool`   |         | Execute command in dry run mode           |\n| `--format`      | `string` | `yaml`  | Format the output. Values: [yaml \\| json] |\n| `--name`        | `string` |         | Project name to set in the Compose file   |\n| `--project-dir` | `string` |         | Directory to use for the project          |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_alpha_publish.md",
    "content": "# docker compose alpha publish\n\n<!---MARKER_GEN_START-->\nPublish compose application\n\n### Options\n\n| Name                      | Type     | Default | Description                                                                    |\n|:--------------------------|:---------|:--------|:-------------------------------------------------------------------------------|\n| `--dry-run`               | `bool`   |         | Execute command in dry run mode                                                |\n| `--oci-version`           | `string` |         | OCI image/artifact specification version (automatically determined by default) |\n| `--resolve-image-digests` | `bool`   |         | Pin image tags to digests                                                      |\n| `--with-env`              | `bool`   |         | Include environment variables in the published OCI artifact                    |\n| `-y`, `--yes`             | `bool`   |         | Assume \"yes\" as answer to all prompts                                          |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_alpha_scale.md",
    "content": "# docker compose alpha scale\n\n<!---MARKER_GEN_START-->\nScale services\n\n### Options\n\n| Name        | Type | Default | Description                     |\n|:------------|:-----|:--------|:--------------------------------|\n| `--dry-run` |      |         | Execute command in dry run mode |\n| `--no-deps` |      |         | Don't start linked services     |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_alpha_viz.md",
    "content": "# docker compose alpha viz\n\n<!---MARKER_GEN_START-->\nEXPERIMENTAL - Generate a graphviz graph from your compose file\n\n### Options\n\n| Name                 | Type   | Default | Description                                                                                        |\n|:---------------------|:-------|:--------|:---------------------------------------------------------------------------------------------------|\n| `--dry-run`          | `bool` |         | Execute command in dry run mode                                                                    |\n| `--image`            | `bool` |         | Include service's image name in output graph                                                       |\n| `--indentation-size` | `int`  | `1`     | Number of tabs or spaces to use for indentation                                                    |\n| `--networks`         | `bool` |         | Include service's attached networks in output graph                                                |\n| `--ports`            | `bool` |         | Include service's exposed ports in output graph                                                    |\n| `--spaces`           | `bool` |         | If given, space character ' ' will be used to indent,<br>otherwise tab character '\\t' will be used |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_alpha_watch.md",
    "content": "# docker compose alpha watch\n\n<!---MARKER_GEN_START-->\nWatch build context for service and rebuild/refresh containers when files are updated\n\n### Options\n\n| Name        | Type | Default | Description                                   |\n|:------------|:-----|:--------|:----------------------------------------------|\n| `--dry-run` |      |         | Execute command in dry run mode               |\n| `--no-up`   |      |         | Do not build & start services before watching |\n| `--quiet`   |      |         | hide build output                             |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_attach.md",
    "content": "# docker compose attach\n\n<!---MARKER_GEN_START-->\nAttach local standard input, output, and error streams to a service's running container\n\n### Options\n\n| Name            | Type     | Default | Description                                               |\n|:----------------|:---------|:--------|:----------------------------------------------------------|\n| `--detach-keys` | `string` |         | Override the key sequence for detaching from a container. |\n| `--dry-run`     | `bool`   |         | Execute command in dry run mode                           |\n| `--index`       | `int`    | `0`     | index of the container if service has multiple replicas.  |\n| `--no-stdin`    | `bool`   |         | Do not attach STDIN                                       |\n| `--sig-proxy`   | `bool`   | `true`  | Proxy all received signals to the process                 |\n\n\n<!---MARKER_GEN_END-->\n"
  },
  {
    "path": "docs/reference/compose_bridge.md",
    "content": "# docker compose bridge\n\n<!---MARKER_GEN_START-->\nConvert compose files into another model\n\n### Subcommands\n\n| Name                                                   | Description                                                                  |\n|:-------------------------------------------------------|:-----------------------------------------------------------------------------|\n| [`convert`](compose_bridge_convert.md)                 | Convert compose files to Kubernetes manifests, Helm charts, or another model |\n| [`transformations`](compose_bridge_transformations.md) | Manage transformation images                                                 |\n\n\n### Options\n\n| Name        | Type   | Default | Description                     |\n|:------------|:-------|:--------|:--------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_bridge_convert.md",
    "content": "# docker compose bridge convert\n\n<!---MARKER_GEN_START-->\nConvert compose files to Kubernetes manifests, Helm charts, or another model\n\n### Options\n\n| Name                     | Type          | Default | Description                                                                          |\n|:-------------------------|:--------------|:--------|:-------------------------------------------------------------------------------------|\n| `--dry-run`              | `bool`        |         | Execute command in dry run mode                                                      |\n| `-o`, `--output`         | `string`      | `out`   | The output directory for the Kubernetes resources                                    |\n| `--templates`            | `string`      |         | Directory containing transformation templates                                        |\n| `-t`, `--transformation` | `stringArray` |         | Transformation to apply to compose model (default: docker/compose-bridge-kubernetes) |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_bridge_transformations.md",
    "content": "# docker compose bridge transformations\n\n<!---MARKER_GEN_START-->\nManage transformation images\n\n### Subcommands\n\n| Name                                                 | Description                    |\n|:-----------------------------------------------------|:-------------------------------|\n| [`create`](compose_bridge_transformations_create.md) | Create a new transformation    |\n| [`list`](compose_bridge_transformations_list.md)     | List available transformations |\n\n\n### Options\n\n| Name        | Type   | Default | Description                     |\n|:------------|:-------|:--------|:--------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_bridge_transformations_create.md",
    "content": "# docker compose bridge transformations create\n\n<!---MARKER_GEN_START-->\nCreate a new transformation\n\n### Options\n\n| Name           | Type     | Default | Description                                                                 |\n|:---------------|:---------|:--------|:----------------------------------------------------------------------------|\n| `--dry-run`    | `bool`   |         | Execute command in dry run mode                                             |\n| `-f`, `--from` | `string` |         | Existing transformation to copy (default: docker/compose-bridge-kubernetes) |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_bridge_transformations_list.md",
    "content": "# docker compose bridge transformations list\n\n<!---MARKER_GEN_START-->\nList available transformations\n\n### Aliases\n\n`docker compose bridge transformations list`, `docker compose bridge transformations ls`\n\n### Options\n\n| Name            | Type     | Default | Description                                |\n|:----------------|:---------|:--------|:-------------------------------------------|\n| `--dry-run`     | `bool`   |         | Execute command in dry run mode            |\n| `--format`      | `string` | `table` | Format the output. Values: [table \\| json] |\n| `-q`, `--quiet` | `bool`   |         | Only display transformer names             |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_build.md",
    "content": "# docker compose build\n\n<!---MARKER_GEN_START-->\nServices are built once and then tagged, by default as `project-service`.\n\nIf the Compose file specifies an\n[image](https://github.com/compose-spec/compose-spec/blob/main/spec.md#image) name,\nthe image is tagged with that name, substituting any variables beforehand. See\n[variable interpolation](https://github.com/compose-spec/compose-spec/blob/main/spec.md#interpolation).\n\nIf you change a service's `Dockerfile` or the contents of its build directory,\nrun `docker compose build` to rebuild it.\n\n### Options\n\n| Name                  | Type          | Default | Description                                                                                                 |\n|:----------------------|:--------------|:--------|:------------------------------------------------------------------------------------------------------------|\n| `--build-arg`         | `stringArray` |         | Set build-time variables for services                                                                       |\n| `--builder`           | `string`      |         | Set builder to use                                                                                          |\n| `--check`             | `bool`        |         | Check build configuration                                                                                   |\n| `--dry-run`           | `bool`        |         | Execute command in dry run mode                                                                             |\n| `-m`, `--memory`      | `bytes`       | `0`     | Set memory limit for the build container. Not supported by BuildKit.                                        
|\n| `--no-cache`          | `bool`        |         | Do not use cache when building the image                                                                    |\n| `--print`             | `bool`        |         | Print equivalent bake file                                                                                  |\n| `--provenance`        | `string`      |         | Add a provenance attestation                                                                                |\n| `--pull`              | `bool`        |         | Always attempt to pull a newer version of the image                                                         |\n| `--push`              | `bool`        |         | Push service images                                                                                         |\n| `-q`, `--quiet`       | `bool`        |         | Suppress the build output                                                                                   |\n| `--sbom`              | `string`      |         | Add a SBOM attestation                                                                                      |\n| `--ssh`               | `string`      |         | Set SSH authentications used when building service images. (use 'default' for using your default SSH Agent) |\n| `--with-dependencies` | `bool`        |         | Also build dependencies (transitively)                                                                      |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nServices are built once and then tagged, by default as `project-service`.\n\nIf the Compose file specifies an\n[image](https://github.com/compose-spec/compose-spec/blob/main/spec.md#image) name,\nthe image is tagged with that name, substituting any variables beforehand. 
See\n[variable interpolation](https://github.com/compose-spec/compose-spec/blob/main/spec.md#interpolation).\n\nIf you change a service's `Dockerfile` or the contents of its build directory,\nrun `docker compose build` to rebuild it.\n"
  },
  {
    "path": "docs/reference/compose_commit.md",
    "content": "# docker compose commit\n\n<!---MARKER_GEN_START-->\nCreate a new image from a service container's changes\n\n### Options\n\n| Name              | Type     | Default | Description                                                |\n|:------------------|:---------|:--------|:-----------------------------------------------------------|\n| `-a`, `--author`  | `string` |         | Author (e.g., \"John Hannibal Smith <hannibal@a-team.com>\") |\n| `-c`, `--change`  | `list`   |         | Apply Dockerfile instruction to the created image          |\n| `--dry-run`       | `bool`   |         | Execute command in dry run mode                            |\n| `--index`         | `int`    | `0`     | index of the container if service has multiple replicas.   |\n| `-m`, `--message` | `string` |         | Commit message                                             |\n| `-p`, `--pause`   | `bool`   | `true`  | Pause container during commit                              |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_config.md",
    "content": "# docker compose convert\n\n<!---MARKER_GEN_START-->\n`docker compose config` renders the actual data model to be applied on the Docker Engine.\nIt merges the Compose files set by `-f` flags, resolves variables in the Compose file, and expands short-notation into\nthe canonical format.\n\n### Options\n\n| Name                      | Type     | Default | Description                                                                 |\n|:--------------------------|:---------|:--------|:----------------------------------------------------------------------------|\n| `--dry-run`               | `bool`   |         | Execute command in dry run mode                                             |\n| `--environment`           | `bool`   |         | Print environment used for interpolation.                                   |\n| `--format`                | `string` |         | Format the output. Values: [yaml \\| json]                                   |\n| `--hash`                  | `string` |         | Print the service config hash, one per line.                                |\n| `--images`                | `bool`   |         | Print the image names, one per line.                                        |\n| `--lock-image-digests`    | `bool`   |         | Produces an override file with image digests                                |\n| `--models`                | `bool`   |         | Print the model names, one per line.                                        |\n| `--networks`              | `bool`   |         | Print the network names, one per line.                                      
|\n| `--no-consistency`        | `bool`   |         | Don't check model consistency - warning: may produce invalid Compose output |\n| `--no-env-resolution`     | `bool`   |         | Don't resolve service env files                                             |\n| `--no-interpolate`        | `bool`   |         | Don't interpolate environment variables                                     |\n| `--no-normalize`          | `bool`   |         | Don't normalize compose model                                               |\n| `--no-path-resolution`    | `bool`   |         | Don't resolve file paths                                                    |\n| `-o`, `--output`          | `string` |         | Save to file (default to stdout)                                            |\n| `--profiles`              | `bool`   |         | Print the profile names, one per line.                                      |\n| `-q`, `--quiet`           | `bool`   |         | Only validate the configuration, don't print anything                       |\n| `--resolve-image-digests` | `bool`   |         | Pin image tags to digests                                                   |\n| `--services`              | `bool`   |         | Print the service names, one per line.                                      |\n| `--variables`             | `bool`   |         | Print model variables and default values.                                   |\n| `--volumes`               | `bool`   |         | Print the volume names, one per line.                                       |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\n`docker compose config` renders the actual data model to be applied on the Docker Engine.\nIt merges the Compose files set by `-f` flags, resolves variables in the Compose file, and expands short-notation into\nthe canonical format.\n"
  },
  {
    "path": "docs/reference/compose_cp.md",
    "content": "# docker compose cp\n\n<!---MARKER_GEN_START-->\nCopy files/folders between a service container and the local filesystem\n\n### Options\n\n| Name                  | Type   | Default | Description                                             |\n|:----------------------|:-------|:--------|:--------------------------------------------------------|\n| `--all`               | `bool` |         | Include containers created by the run command           |\n| `-a`, `--archive`     | `bool` |         | Archive mode (copy all uid/gid information)             |\n| `--dry-run`           | `bool` |         | Execute command in dry run mode                         |\n| `-L`, `--follow-link` | `bool` |         | Always follow symbol link in SRC_PATH                   |\n| `--index`             | `int`  | `0`     | Index of the container if service has multiple replicas |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_create.md",
    "content": "# docker compose create\n\n<!---MARKER_GEN_START-->\nCreates containers for a service\n\n### Options\n\n| Name               | Type          | Default  | Description                                                                                   |\n|:-------------------|:--------------|:---------|:----------------------------------------------------------------------------------------------|\n| `--build`          | `bool`        |          | Build images before starting containers                                                       |\n| `--dry-run`        | `bool`        |          | Execute command in dry run mode                                                               |\n| `--force-recreate` | `bool`        |          | Recreate containers even if their configuration and image haven't changed                     |\n| `--no-build`       | `bool`        |          | Don't build an image, even if it's policy                                                     |\n| `--no-recreate`    | `bool`        |          | If containers already exist, don't recreate them. Incompatible with --force-recreate.         |\n| `--pull`           | `string`      | `policy` | Pull image before running (\"always\"\\|\"missing\"\\|\"never\"\\|\"build\")                             |\n| `--quiet-pull`     | `bool`        |          | Pull without printing progress information                                                    |\n| `--remove-orphans` | `bool`        |          | Remove containers for services not defined in the Compose file                                |\n| `--scale`          | `stringArray` |          | Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present. |\n| `-y`, `--yes`      | `bool`        |          | Assume \"yes\" as answer to all prompts and run non-interactively                               |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_down.md",
    "content": "# docker compose down\n\n<!---MARKER_GEN_START-->\nStops containers and removes containers, networks, volumes, and images created by `up`.\n\nBy default, the only things removed are:\n\n- Containers for services defined in the Compose file.\n- Networks defined in the networks section of the Compose file.\n- The default network, if one is used.\n\nNetworks and volumes defined as external are never removed.\n\nAnonymous volumes are not removed by default. However, as they don’t have a stable name, they are not automatically\nmounted by a subsequent `up`. For data that needs to persist between updates, use explicit paths as bind mounts or\nnamed volumes.\n\n### Options\n\n| Name               | Type     | Default | Description                                                                                                             |\n|:-------------------|:---------|:--------|:------------------------------------------------------------------------------------------------------------------------|\n| `--dry-run`        | `bool`   |         | Execute command in dry run mode                                                                                         |\n| `--remove-orphans` | `bool`   |         | Remove containers for services not defined in the Compose file                                                          |\n| `--rmi`            | `string` |         | Remove images used by services. 
\"local\" remove only images that don't have a custom tag (\"local\"\\|\"all\")                |\n| `-t`, `--timeout`  | `int`    | `0`     | Specify a shutdown timeout in seconds                                                                                   |\n| `-v`, `--volumes`  | `bool`   |         | Remove named volumes declared in the \"volumes\" section of the Compose file and anonymous volumes attached to containers |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nStops containers and removes containers, networks, volumes, and images created by `up`.\n\nBy default, the only things removed are:\n\n- Containers for services defined in the Compose file.\n- Networks defined in the networks section of the Compose file.\n- The default network, if one is used.\n\nNetworks and volumes defined as external are never removed.\n\nAnonymous volumes are not removed by default. However, as they don’t have a stable name, they are not automatically\nmounted by a subsequent `up`. For data that needs to persist between updates, use explicit paths as bind mounts or\nnamed volumes.\n"
  },
  {
    "path": "docs/reference/compose_events.md",
    "content": "# docker compose events\n\n<!---MARKER_GEN_START-->\nStream container events for every container in the project.\n\nWith the `--json` flag, a json object is printed one per line with the format:\n\n```json\n{\n    \"time\": \"2015-11-20T18:01:03.615550\",\n    \"type\": \"container\",\n    \"action\": \"create\",\n    \"id\": \"213cf7...5fc39a\",\n    \"service\": \"web\",\n    \"attributes\": {\n      \"name\": \"application_web_1\",\n      \"image\": \"alpine:edge\"\n    }\n}\n```\n\nThe events that can be received using this can be seen [here](/reference/cli/docker/system/events/#object-types).\n\n### Options\n\n| Name        | Type     | Default | Description                               |\n|:------------|:---------|:--------|:------------------------------------------|\n| `--dry-run` | `bool`   |         | Execute command in dry run mode           |\n| `--json`    | `bool`   |         | Output events as a stream of json objects |\n| `--since`   | `string` |         | Show all events created since timestamp   |\n| `--until`   | `string` |         | Stream events until this timestamp        |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nStream container events for every container in the project.\n\nWith the `--json` flag, a json object is printed one per line with the format:\n\n```json\n{\n    \"time\": \"2015-11-20T18:01:03.615550\",\n    \"type\": \"container\",\n    \"action\": \"create\",\n    \"id\": \"213cf7...5fc39a\",\n    \"service\": \"web\",\n    \"attributes\": {\n      \"name\": \"application_web_1\",\n      \"image\": \"alpine:edge\"\n    }\n}\n```\n\nThe events that can be received using this can be seen [here](https://docs.docker.com/reference/cli/docker/system/events/#object-types).\n"
  },
  {
    "path": "docs/reference/compose_exec.md",
    "content": "# docker compose exec\n\n<!---MARKER_GEN_START-->\nThis is the equivalent of `docker exec` targeting a Compose service.\n\nWith this subcommand, you can run arbitrary commands in your services. Commands allocate a TTY by default, so\nyou can use a command such as `docker compose exec web sh` to get an interactive prompt.\n\nBy default, Compose will enter container in interactive mode and allocate a TTY, while the equivalent `docker exec`\ncommand requires passing `--interactive --tty` flags to get the same behavior. Compose also support those two flags\nto offer a smooth migration between commands, whenever they are no-op by default. Still, `interactive` can be used to\nforce disabling interactive mode (`--interactive=false`), typically when `docker compose exec` command is used inside\na script.\n\n### Options\n\n| Name              | Type          | Default | Description                                                                      |\n|:------------------|:--------------|:--------|:---------------------------------------------------------------------------------|\n| `-d`, `--detach`  | `bool`        |         | Detached mode: Run command in the background                                     |\n| `--dry-run`       | `bool`        |         | Execute command in dry run mode                                                  |\n| `-e`, `--env`     | `stringArray` |         | Set environment variables                                                        |\n| `--index`         | `int`         | `0`     | Index of the container if service has multiple replicas                          |\n| `-T`, `--no-tty`  | `bool`        | `true`  | Disable pseudo-TTY allocation. By default 'docker compose exec' allocates a TTY. 
|\n| `--privileged`    | `bool`        |         | Give extended privileges to the process                                          |\n| `-u`, `--user`    | `string`      |         | Run the command as this user                                                     |\n| `-w`, `--workdir` | `string`      |         | Path to workdir directory for this command                                       |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nThis is the equivalent of `docker exec` targeting a Compose service.\n\nWith this subcommand, you can run arbitrary commands in your services. Commands allocate a TTY by default, so\nyou can use a command such as `docker compose exec web sh` to get an interactive prompt.\n\nBy default, Compose will enter container in interactive mode and allocate a TTY, while the equivalent `docker exec` \ncommand requires passing `--interactive --tty` flags to get the same behavior. Compose also support those two flags\nto offer a smooth migration between commands, whenever they are no-op by default. Still, `interactive` can be used to \nforce disabling interactive mode (`--interactive=false`), typically when `docker compose exec` command is used inside \na script."
  },
  {
    "path": "docs/reference/compose_export.md",
    "content": "# docker compose export\n\n<!---MARKER_GEN_START-->\nExport a service container's filesystem as a tar archive\n\n### Options\n\n| Name             | Type     | Default | Description                                              |\n|:-----------------|:---------|:--------|:---------------------------------------------------------|\n| `--dry-run`      | `bool`   |         | Execute command in dry run mode                          |\n| `--index`        | `int`    | `0`     | index of the container if service has multiple replicas. |\n| `-o`, `--output` | `string` |         | Write to a file, instead of STDOUT                       |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_images.md",
    "content": "# docker compose images\n\n<!---MARKER_GEN_START-->\nList images used by the created containers\n\n### Options\n\n| Name            | Type     | Default | Description                                |\n|:----------------|:---------|:--------|:-------------------------------------------|\n| `--dry-run`     | `bool`   |         | Execute command in dry run mode            |\n| `--format`      | `string` | `table` | Format the output. Values: [table \\| json] |\n| `-q`, `--quiet` | `bool`   |         | Only display IDs                           |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_kill.md",
    "content": "# docker compose kill\n\n<!---MARKER_GEN_START-->\nForces running containers to stop by sending a `SIGKILL` signal. Optionally the signal can be passed, for example:\n\n```console\n$ docker compose kill -s SIGINT\n```\n\n### Options\n\n| Name               | Type     | Default   | Description                                                    |\n|:-------------------|:---------|:----------|:---------------------------------------------------------------|\n| `--dry-run`        | `bool`   |           | Execute command in dry run mode                                |\n| `--remove-orphans` | `bool`   |           | Remove containers for services not defined in the Compose file |\n| `-s`, `--signal`   | `string` | `SIGKILL` | SIGNAL to send to the container                                |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nForces running containers to stop by sending a `SIGKILL` signal. Optionally the signal can be passed, for example:\n\n```console\n$ docker compose kill -s SIGINT\n```\n"
  },
  {
    "path": "docs/reference/compose_logs.md",
    "content": "# docker compose logs\n\n<!---MARKER_GEN_START-->\nDisplays log output from services\n\n### Options\n\n| Name                 | Type     | Default | Description                                                                                    |\n|:---------------------|:---------|:--------|:-----------------------------------------------------------------------------------------------|\n| `--dry-run`          | `bool`   |         | Execute command in dry run mode                                                                |\n| `-f`, `--follow`     | `bool`   |         | Follow log output                                                                              |\n| `--index`            | `int`    | `0`     | index of the container if service has multiple replicas                                        |\n| `--no-color`         | `bool`   |         | Produce monochrome output                                                                      |\n| `--no-log-prefix`    | `bool`   |         | Don't print prefix in logs                                                                     |\n| `--since`            | `string` |         | Show logs since timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes)    |\n| `-n`, `--tail`       | `string` | `all`   | Number of lines to show from the end of the logs for each container                            |\n| `-t`, `--timestamps` | `bool`   |         | Show timestamps                                                                                |\n| `--until`            | `string` |         | Show logs before a timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes) |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nDisplays log output from services\n"
  },
  {
    "path": "docs/reference/compose_ls.md",
    "content": "# docker compose ls\n\n<!---MARKER_GEN_START-->\nLists running Compose projects\n\n### Options\n\n| Name            | Type     | Default | Description                                |\n|:----------------|:---------|:--------|:-------------------------------------------|\n| `-a`, `--all`   | `bool`   |         | Show all stopped Compose projects          |\n| `--dry-run`     | `bool`   |         | Execute command in dry run mode            |\n| `--filter`      | `filter` |         | Filter output based on conditions provided |\n| `--format`      | `string` | `table` | Format the output. Values: [table \\| json] |\n| `-q`, `--quiet` | `bool`   |         | Only display project names                 |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nLists running Compose projects\n"
  },
  {
    "path": "docs/reference/compose_pause.md",
    "content": "# docker compose pause\n\n<!---MARKER_GEN_START-->\nPauses running containers of a service. They can be unpaused with `docker compose unpause`.\n\n### Options\n\n| Name        | Type   | Default | Description                     |\n|:------------|:-------|:--------|:--------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nPauses running containers of a service. They can be unpaused with `docker compose unpause`."
  },
  {
    "path": "docs/reference/compose_port.md",
    "content": "# docker compose port\n\n<!---MARKER_GEN_START-->\nPrints the public port for a port binding\n\n### Options\n\n| Name         | Type     | Default | Description                                             |\n|:-------------|:---------|:--------|:--------------------------------------------------------|\n| `--dry-run`  | `bool`   |         | Execute command in dry run mode                         |\n| `--index`    | `int`    | `0`     | Index of the container if service has multiple replicas |\n| `--protocol` | `string` | `tcp`   | tcp or udp                                              |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nPrints the public port for a port binding\n"
  },
  {
    "path": "docs/reference/compose_ps.md",
    "content": "# docker compose ps\n\n<!---MARKER_GEN_START-->\nLists containers for a Compose project, with current status and exposed ports.\n\n```console\n$ docker compose ps\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n```\n\nBy default, only running containers are shown. `--all` flag can be used to include stopped containers.\n\n```console\n$ docker compose ps --all\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\nexample-bar-1   alpine    \"/entrypoint.…\"   bar        4 seconds ago   exited (0)\n```\n\n### Options\n\n| Name                  | Type          | Default | Description                                                                                                                                                                                                                                                                                                                                                                                                                          |\n|:----------------------|:--------------|:--------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `-a`, `--all`         | `bool`        |         | Show all stopped containers (including those created by the run command)                                                                           
                                                                                                                                                                                                                                                                                  |\n| `--dry-run`           | `bool`        |         | Execute command in dry run mode                                                                                                                                                                                                                                                                                                                                                                                                      |\n| [`--filter`](#filter) | `string`      |         | Filter services by a property (supported filters: status)                                                                                                                                                                                                                                                                                                                                                                            |\n| [`--format`](#format) | `string`      | `table` | Format output using a custom template:<br>'table':            Print output in table format with column headers (default)<br>'table TEMPLATE':   Print output in table format using the given Go template<br>'json':             Print in JSON format<br>'TEMPLATE':         Print output using the given Go template.<br>Refer to https://docs.docker.com/go/formatting/ for more information about formatting output with templates |\n| `--no-trunc`          | `bool`        |         | Don't truncate output                                                                                                                                                                                                                              
                                                                                                                                                                                  |\n| `--orphans`           | `bool`        | `true`  | Include orphaned services (not declared by project)                                                                                                                                                                                                                                                                                                                                                                                  |\n| `-q`, `--quiet`       | `bool`        |         | Only display IDs                                                                                                                                                                                                                                                                                                                                                                                                                     |\n| `--services`          | `bool`        |         | Display services                                                                                                                                                                                                                                                                                                                                                                                                                     |\n| [`--status`](#status) | `stringArray` |         | Filter services by status. 
Values: [paused \\| restarting \\| removing \\| running \\| dead \\| created \\| exited]                                                                                                                                                                                                                                                                                                                        |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nLists containers for a Compose project, with current status and exposed ports.\n\n```console\n$ docker compose ps\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n```\n\nBy default, only running containers are shown. The `--all` flag can be used to include stopped containers.\n\n```console\n$ docker compose ps --all\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\nexample-bar-1   alpine    \"/entrypoint.…\"   bar        4 seconds ago   exited (0)\n```\n\n## Examples\n\n### <a name=\"format\"></a> Format the output (--format)\n\nBy default, the `docker compose ps` command uses a table (\"pretty\") format to\nshow the containers. The `--format` flag allows you to specify alternative\npresentations for the output. 
Currently, supported options are `pretty` (default),\nand `json`, which outputs information about the containers as a JSON array:\n\n```console\n$ docker compose ps --format json\n[{\"ID\":\"1553b0236cf4d2715845f053a4ee97042c4f9a2ef655731ee34f1f7940eaa41a\",\"Name\":\"example-bar-1\",\"Command\":\"/docker-entrypoint.sh nginx -g 'daemon off;'\",\"Project\":\"example\",\"Service\":\"bar\",\"State\":\"exited\",\"Health\":\"\",\"ExitCode\":0,\"Publishers\":null},{\"ID\":\"f02a4efaabb67416e1ff127d51c4b5578634a0ad5743bd65225ff7d1909a3fa0\",\"Name\":\"example-foo-1\",\"Command\":\"/docker-entrypoint.sh nginx -g 'daemon off;'\",\"Project\":\"example\",\"Service\":\"foo\",\"State\":\"running\",\"Health\":\"\",\"ExitCode\":0,\"Publishers\":[{\"URL\":\"0.0.0.0\",\"TargetPort\":80,\"PublishedPort\":8080,\"Protocol\":\"tcp\"}]}]\n```\n\nThe JSON output allows you to use the information in other tools for further\nprocessing, for example, using the [`jq` utility](https://stedolan.github.io/jq/)\nto pretty-print the JSON:\n\n```console\n$ docker compose ps --format json | jq .\n[\n  {\n    \"ID\": \"1553b0236cf4d2715845f053a4ee97042c4f9a2ef655731ee34f1f7940eaa41a\",\n    \"Name\": \"example-bar-1\",\n    \"Command\": \"/docker-entrypoint.sh nginx -g 'daemon off;'\",\n    \"Project\": \"example\",\n    \"Service\": \"bar\",\n    \"State\": \"exited\",\n    \"Health\": \"\",\n    \"ExitCode\": 0,\n    \"Publishers\": null\n  },\n  {\n    \"ID\": \"f02a4efaabb67416e1ff127d51c4b5578634a0ad5743bd65225ff7d1909a3fa0\",\n    \"Name\": \"example-foo-1\",\n    \"Command\": \"/docker-entrypoint.sh nginx -g 'daemon off;'\",\n    \"Project\": \"example\",\n    \"Service\": \"foo\",\n    \"State\": \"running\",\n    \"Health\": \"\",\n    \"ExitCode\": 0,\n    \"Publishers\": [\n      {\n        \"URL\": \"0.0.0.0\",\n        \"TargetPort\": 80,\n        \"PublishedPort\": 8080,\n        \"Protocol\": \"tcp\"\n      }\n    ]\n  }\n]\n```\n\n### <a name=\"status\"></a> Filter containers by 
status (--status)\n\nUse the `--status` flag to filter the list of containers by status. For example,\nto show only containers that are running or only containers that have exited:\n\n```console\n$ docker compose ps --status=running\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n\n$ docker compose ps --status=exited\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-bar-1   alpine    \"/entrypoint.…\"   bar        4 seconds ago   exited (0)\n```\n\n### <a name=\"filter\"></a> Filter containers by status (--filter)\n\nThe [`--status` flag](#status) is a convenient shorthand for the `--filter status=<status>`\nflag. The example below is equivalent to the example in the previous section,\nthis time using the `--filter` flag:\n\n```console\n$ docker compose ps --filter status=running\nNAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\nexample-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n```\n\nThe `docker compose ps` command currently only supports the `--filter status=<status>`\noption, but additional filter options may be added in the future.\n"
  },
  {
    "path": "docs/reference/compose_publish.md",
    "content": "# docker compose publish\n\n<!---MARKER_GEN_START-->\nPublish compose application\n\n### Options\n\n| Name                      | Type     | Default | Description                                                                    |\n|:--------------------------|:---------|:--------|:-------------------------------------------------------------------------------|\n| `--app`                   | `bool`   |         | Published compose application (includes referenced images)                     |\n| `--dry-run`               | `bool`   |         | Execute command in dry run mode                                                |\n| `--oci-version`           | `string` |         | OCI image/artifact specification version (automatically determined by default) |\n| `--resolve-image-digests` | `bool`   |         | Pin image tags to digests                                                      |\n| `--with-env`              | `bool`   |         | Include environment variables in the published OCI artifact                    |\n| `-y`, `--yes`             | `bool`   |         | Assume \"yes\" as answer to all prompts                                          |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_pull.md",
    "content": "# docker compose pull\n\n<!---MARKER_GEN_START-->\nPulls an image associated with a service defined in a `compose.yaml` file, but does not start containers based on those images\n\n### Options\n\n| Name                     | Type     | Default | Description                                            |\n|:-------------------------|:---------|:--------|:-------------------------------------------------------|\n| `--dry-run`              | `bool`   |         | Execute command in dry run mode                        |\n| `--ignore-buildable`     | `bool`   |         | Ignore images that can be built                        |\n| `--ignore-pull-failures` | `bool`   |         | Pull what it can and ignores images with pull failures |\n| `--include-deps`         | `bool`   |         | Also pull services declared as dependencies            |\n| `--policy`               | `string` |         | Apply pull policy (\"missing\"\\|\"always\")                |\n| `-q`, `--quiet`          | `bool`   |         | Pull without printing progress information             |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nPulls an image associated with a service defined in a `compose.yaml` file, but does not start containers based on those images\n\n\n## Examples\n\nConsider the following `compose.yaml`:\n\n```yaml\nservices:\n  db:\n    image: postgres\n  web:\n    build: .\n    command: bundle exec rails s -p 3000 -b '0.0.0.0'\n    volumes:\n      - .:/myapp\n    ports:\n      - \"3000:3000\"\n    depends_on:\n      - db\n```\n\nIf you run `docker compose pull ServiceName` in the same directory as the `compose.yaml` file that defines the service,\nDocker pulls the associated image. 
For example, to pull the postgres image configured as the db service in our example,\nyou would run `docker compose pull db`.\n\n```console\n$ docker compose pull db\n[+] Running 1/15\n ⠸ db Pulling                                                             12.4s\n   ⠿ 45b42c59be33 Already exists                                           0.0s\n   ⠹ 40adec129f1a Downloading  3.374MB/4.178MB                             9.3s\n   ⠹ b4c431d00c78 Download complete                                        9.3s\n   ⠹ 2696974e2815 Download complete                                        9.3s\n   ⠹ 564b77596399 Downloading  5.622MB/7.965MB                             9.3s\n   ⠹ 5044045cf6f2 Downloading  216.7kB/391.1kB                             9.3s\n   ⠹ d736e67e6ac3 Waiting                                                  9.3s\n   ⠹ 390c1c9a5ae4 Waiting                                                  9.3s\n   ⠹ c0e62f172284 Waiting                                                  9.3s\n   ⠹ ebcdc659c5bf Waiting                                                  9.3s\n   ⠹ 29be22cb3acc Waiting                                                  9.3s\n   ⠹ f63c47038e66 Waiting                                                  9.3s\n   ⠹ 77a0c198cde5 Waiting                                                  9.3s\n   ⠹ c8752d5b785c Waiting                                                  9.3s\n```\n\n`docker compose pull` tries to pull the image for services with a build section. If the pull fails, it lets you know that this service image must be built. You can skip this check by setting the `--ignore-buildable` flag.\n"
  },
  {
    "path": "docs/reference/compose_push.md",
    "content": "# docker compose push\n\n<!---MARKER_GEN_START-->\nPushes images for services to their respective registry/repository.\n\nThe following assumptions are made:\n- You are pushing an image you have built locally\n- You have access to the build key\n\nExamples\n\n```yaml\nservices:\n  service1:\n    build: .\n    image: localhost:5000/yourimage  ## goes to local registry\n\n  service2:\n    build: .\n    image: your-dockerid/yourimage  ## goes to your repository on Docker Hub\n```\n\n### Options\n\n| Name                     | Type   | Default | Description                                            |\n|:-------------------------|:-------|:--------|:-------------------------------------------------------|\n| `--dry-run`              | `bool` |         | Execute command in dry run mode                        |\n| `--ignore-push-failures` | `bool` |         | Push what it can and ignores images with push failures |\n| `--include-deps`         | `bool` |         | Also push images of services declared as dependencies  |\n| `-q`, `--quiet`          | `bool` |         | Push without printing progress information             |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nPushes images for services to their respective registry/repository.\n\nThe following assumptions are made:\n- You are pushing an image you have built locally\n- You have access to the build key\n\nExamples\n\n```yaml\nservices:\n  service1:\n    build: .\n    image: localhost:5000/yourimage  ## goes to local registry\n\n  service2:\n    build: .\n    image: your-dockerid/yourimage  ## goes to your repository on Docker Hub\n```\n"
  },
  {
    "path": "docs/reference/compose_restart.md",
    "content": "# docker compose restart\n\n<!---MARKER_GEN_START-->\nRestarts all stopped and running services, or the specified services only.\n\nIf you make changes to your `compose.yml` configuration, these changes are not reflected\nafter running this command. For example, changes to environment variables (which are added\nafter a container is built, but before the container's command is executed) are not updated\nafter restarting.\n\nIf you are looking to configure a service's restart policy, refer to\n[restart](https://github.com/compose-spec/compose-spec/blob/main/spec.md#restart)\nor [restart_policy](https://github.com/compose-spec/compose-spec/blob/main/deploy.md#restart_policy).\n\n### Options\n\n| Name              | Type   | Default | Description                           |\n|:------------------|:-------|:--------|:--------------------------------------|\n| `--dry-run`       | `bool` |         | Execute command in dry run mode       |\n| `--no-deps`       | `bool` |         | Don't restart dependent services      |\n| `-t`, `--timeout` | `int`  | `0`     | Specify a shutdown timeout in seconds |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nRestarts all stopped and running services, or the specified services only.\n\nIf you make changes to your `compose.yml` configuration, these changes are not reflected\nafter running this command. For example, changes to environment variables (which are added\nafter a container is built, but before the container's command is executed) are not updated\nafter restarting.\n\nIf you are looking to configure a service's restart policy, refer to\n[restart](https://github.com/compose-spec/compose-spec/blob/main/spec.md#restart)\nor [restart_policy](https://github.com/compose-spec/compose-spec/blob/main/deploy.md#restart_policy).\n"
  },
  {
    "path": "docs/reference/compose_rm.md",
    "content": "# docker compose rm\n\n<!---MARKER_GEN_START-->\nRemoves stopped service containers.\n\nBy default, anonymous volumes attached to containers are not removed. You can override this with `-v`. To list all\nvolumes, use `docker volume ls`.\n\nAny data which is not in a volume is lost.\n\nRunning the command with no options also removes one-off containers created by `docker compose run`:\n\n```console\n$ docker compose rm\nGoing to remove djangoquickstart_web_run_1\nAre you sure? [yN] y\nRemoving djangoquickstart_web_run_1 ... done\n```\n\n### Options\n\n| Name              | Type   | Default | Description                                         |\n|:------------------|:-------|:--------|:----------------------------------------------------|\n| `--dry-run`       | `bool` |         | Execute command in dry run mode                     |\n| `-f`, `--force`   | `bool` |         | Don't ask to confirm removal                        |\n| `-s`, `--stop`    | `bool` |         | Stop the containers, if required, before removing   |\n| `-v`, `--volumes` | `bool` |         | Remove any anonymous volumes attached to containers |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nRemoves stopped service containers.\n\nBy default, anonymous volumes attached to containers are not removed. You can override this with `-v`. To list all\nvolumes, use `docker volume ls`.\n\nAny data which is not in a volume is lost.\n\nRunning the command with no options also removes one-off containers created by `docker compose run`:\n\n```console\n$ docker compose rm\nGoing to remove djangoquickstart_web_run_1\nAre you sure? [yN] y\nRemoving djangoquickstart_web_run_1 ... done\n```\n"
  },
  {
    "path": "docs/reference/compose_run.md",
    "content": "# docker compose run\n\n<!---MARKER_GEN_START-->\nRuns a one-time command against a service.\n\nThe following command starts the `web` service and runs `bash` as its command:\n\n```console\n$ docker compose run web bash\n```\n\nCommands you use with run start in new containers with configuration defined by that of the service,\nincluding volumes, links, and other details. However, there are two important differences:\n\nFirst, the command passed by `run` overrides the command defined in the service configuration. For example, if the\n`web` service configuration is started with `bash`, then `docker compose run web python app.py` overrides it with\n`python app.py`.\n\nThe second difference is that the `docker compose run` command does not create any of the ports specified in the\nservice configuration. This prevents port collisions with already-open ports. If you do want the service’s ports\nto be created and mapped to the host, specify the `--service-ports` flag:\n\n```console\n$ docker compose run --service-ports web python manage.py shell\n```\n\nAlternatively, manual port mapping can be specified with the `--publish` or `-p` options, just as when using `docker run`:\n\n```console\n$ docker compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python manage.py shell\n```\n\nIf you start a service configured with links, the run command first checks to see if the linked service is running\nand starts the service if it is stopped. Once all the linked services are running, the run executes the command you\npassed it. 
For example, you could run:\n\n```console\n$ docker compose run db psql -h db -U docker\n```\n\nThis opens an interactive PostgreSQL shell for the linked `db` container.\n\nIf you do not want the run command to start linked containers, use the `--no-deps` flag:\n\n```console\n$ docker compose run --no-deps web python manage.py shell\n```\n\nIf you want to remove the container after running while overriding the container’s restart policy, use the `--rm` flag:\n\n```console\n$ docker compose run --rm web python manage.py db upgrade\n```\n\nThis runs a database upgrade script, and removes the container when finished running, even if a restart policy is\nspecified in the service configuration.\n\n### Options\n\n| Name                    | Type          | Default  | Description                                                                      |\n|:------------------------|:--------------|:---------|:---------------------------------------------------------------------------------|\n| `--build`               | `bool`        |          | Build image before starting container                                            |\n| `--cap-add`             | `list`        |          | Add Linux capabilities                                                           |\n| `--cap-drop`            | `list`        |          | Drop Linux capabilities                                                          |\n| `-d`, `--detach`        | `bool`        |          | Run container in background and print container ID                               |\n| `--dry-run`             | `bool`        |          | Execute command in dry run mode                                                  |\n| `--entrypoint`          | `string`      |          | Override the entrypoint of the image                                             |\n| `-e`, `--env`           | `stringArray` |          | Set environment variables                                                        |\n| `--env-from-file`       | 
`stringArray` |          | Set environment variables from file                                              |\n| `-i`, `--interactive`   | `bool`        | `true`   | Keep STDIN open even if not attached                                             |\n| `-l`, `--label`         | `stringArray` |          | Add or override a label                                                          |\n| `--name`                | `string`      |          | Assign a name to the container                                                   |\n| `-T`, `--no-TTY`        | `bool`        | `true`   | Disable pseudo-TTY allocation (default: auto-detected)                           |\n| `--no-deps`             | `bool`        |          | Don't start linked services                                                      |\n| `-p`, `--publish`       | `stringArray` |          | Publish a container's port(s) to the host                                        |\n| `--pull`                | `string`      | `policy` | Pull image before running (\"always\"\\|\"missing\"\\|\"never\")                         |\n| `-q`, `--quiet`         | `bool`        |          | Don't print anything to STDOUT                                                   |\n| `--quiet-build`         | `bool`        |          | Suppress progress output from the build process                                  |\n| `--quiet-pull`          | `bool`        |          | Pull without printing progress information                                       |\n| `--remove-orphans`      | `bool`        |          | Remove containers for services not defined in the Compose file                   |\n| `--rm`                  | `bool`        |          | Automatically remove the container when it exits                                 |\n| `-P`, `--service-ports` | `bool`        |          | Run command with all service's ports enabled and mapped to the host              |\n| `--use-aliases`         | `bool`        |          | Use the service's 
network useAliases in the network(s) the container connects to |\n| `-u`, `--user`          | `string`      |          | Run as specified username or uid                                                 |\n| `-v`, `--volume`        | `stringArray` |          | Bind mount a volume                                                              |\n| `-w`, `--workdir`       | `string`      |          | Working directory inside the container                                           |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nRuns a one-time command against a service.\n\nThe following command starts the `web` service and runs `bash` as its command:\n\n```console\n$ docker compose run web bash\n```\n\nCommands you use with run start in new containers with configuration defined by that of the service,\nincluding volumes, links, and other details. However, there are two important differences:\n\nFirst, the command passed by `run` overrides the command defined in the service configuration. For example, if the\n`web` service configuration is started with `bash`, then `docker compose run web python app.py` overrides it with\n`python app.py`.\n\nThe second difference is that the `docker compose run` command does not create any of the ports specified in the\nservice configuration. This prevents port collisions with already-open ports. If you do want the service’s ports\nto be created and mapped to the host, specify the `--service-ports` flag:\n\n```console\n$ docker compose run --service-ports web python manage.py shell\n```\n\nAlternatively, manual port mapping can be specified with the `--publish` or `-p` options, just as when using `docker run`:\n\n```console\n$ docker compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python manage.py shell\n```\n\nIf you start a service configured with links, the run command first checks to see if the linked service is running\nand starts the service if it is stopped. 
Once all the linked services are running, the run executes the command you\npassed it. For example, you could run:\n\n```console\n$ docker compose run db psql -h db -U docker\n```\n\nThis opens an interactive PostgreSQL shell for the linked `db` container.\n\nIf you do not want the run command to start linked containers, use the `--no-deps` flag:\n\n```console\n$ docker compose run --no-deps web python manage.py shell\n```\n\nIf you want to remove the container after running while overriding the container’s restart policy, use the `--rm` flag:\n\n```console\n$ docker compose run --rm web python manage.py db upgrade\n```\n\nThis runs a database upgrade script, and removes the container when finished running, even if a restart policy is\nspecified in the service configuration.\n"
  },
  {
    "path": "docs/reference/compose_scale.md",
    "content": "# docker compose scale\n\n<!---MARKER_GEN_START-->\nScale services\n\n### Options\n\n| Name        | Type   | Default | Description                     |\n|:------------|:-------|:--------|:--------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode |\n| `--no-deps` | `bool` |         | Don't start linked services     |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_start.md",
    "content": "# docker compose start\n\n<!---MARKER_GEN_START-->\nStarts existing containers for a service\n\n### Options\n\n| Name             | Type   | Default | Description                                                                |\n|:-----------------|:-------|:--------|:---------------------------------------------------------------------------|\n| `--dry-run`      | `bool` |         | Execute command in dry run mode                                            |\n| `--wait`         | `bool` |         | Wait for services to be running\\|healthy. Implies detached mode.           |\n| `--wait-timeout` | `int`  | `0`     | Maximum duration in seconds to wait for the project to be running\\|healthy |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nStarts existing containers for a service\n"
  },
  {
    "path": "docs/reference/compose_stats.md",
    "content": "# docker compose stats\n\n<!---MARKER_GEN_START-->\nDisplay a live stream of container(s) resource usage statistics\n\n### Options\n\n| Name          | Type     | Default | Description                                                                                                                                                                                                                                                                                                                                                                                                                                  |\n|:--------------|:---------|:--------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `-a`, `--all` | `bool`   |         | Show all containers (default shows just running)                                                                                                                                                                                                                                                                                                                                                                                             |\n| `--dry-run`   | `bool`   |         | Execute command in dry run mode                                                                                                                                                                                                                                                                                                                                                                               
                               |\n| `--format`    | `string` |         | Format output using a custom template:<br>'table':            Print output in table format with column headers (default)<br>'table TEMPLATE':   Print output in table format using the given Go template<br>'json':             Print in JSON format<br>'TEMPLATE':         Print output using the given Go template.<br>Refer to https://docs.docker.com/engine/cli/formatting/ for more information about formatting output with templates |\n| `--no-stream` | `bool`   |         | Disable streaming stats and only pull the first result                                                                                                                                                                                                                                                                                                                                                                                       |\n| `--no-trunc`  | `bool`   |         | Do not truncate output                                                                                                                                                                                                                                                                                                                                                                                                                       |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_stop.md",
    "content": "# docker compose stop\n\n<!---MARKER_GEN_START-->\nStops running containers without removing them. They can be started again with `docker compose start`.\n\n### Options\n\n| Name              | Type   | Default | Description                           |\n|:------------------|:-------|:--------|:--------------------------------------|\n| `--dry-run`       | `bool` |         | Execute command in dry run mode       |\n| `-t`, `--timeout` | `int`  | `0`     | Specify a shutdown timeout in seconds |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nStops running containers without removing them. They can be started again with `docker compose start`.\n"
  },
  {
    "path": "docs/reference/compose_top.md",
    "content": "# docker compose top\n\n<!---MARKER_GEN_START-->\nDisplays the running processes\n\n### Options\n\n| Name        | Type   | Default | Description                     |\n|:------------|:-------|:--------|:--------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nDisplays the running processes\n\n## Examples\n\n```console\n$ docker compose top\nexample_foo_1\nUID    PID      PPID     C    STIME   TTY   TIME       CMD\nroot   142353   142331   2    15:33   ?     00:00:00   ping localhost -c 5\n```\n"
  },
  {
    "path": "docs/reference/compose_unpause.md",
    "content": "# docker compose unpause\n\n<!---MARKER_GEN_START-->\nUnpauses paused containers of a service\n\n### Options\n\n| Name        | Type   | Default | Description                     |\n|:------------|:-------|:--------|:--------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nUnpauses paused containers of a service\n"
  },
  {
    "path": "docs/reference/compose_up.md",
    "content": "# docker compose up\n\n<!---MARKER_GEN_START-->\nBuilds, (re)creates, starts, and attaches to containers for a service.\n\nUnless they are already running, this command also starts any linked services.\n\nThe `docker compose up` command aggregates the output of each container (like `docker compose logs --follow` does).\nOne can optionally select a subset of services to attach to using `--attach` flag, or exclude some services using\n`--no-attach` to prevent output to be flooded by some verbose services.\n\nWhen the command exits, all containers are stopped. Running `docker compose up --detach` starts the containers in the\nbackground and leaves them running.\n\nIf there are existing containers for a service, and the service’s configuration or image was changed after the\ncontainer’s creation, `docker compose up` picks up the changes by stopping and recreating the containers\n(preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag.\n\nIf you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag.\n\nIf the process encounters an error, the exit code for this command is `1`.\nIf the process is interrupted using `SIGINT` (ctrl + C) or `SIGTERM`, the containers are stopped, and the exit code is `0`.\n\n### Options\n\n| Name                           | Type          | Default  | Description                                                                                                                                         |\n|:-------------------------------|:--------------|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------|\n| `--abort-on-container-exit`    | `bool`        |          | Stops all containers if any container was stopped. 
Incompatible with -d                                                                             |\n| `--abort-on-container-failure` | `bool`        |          | Stops all containers if any container exited with failure. Incompatible with -d                                                                     |\n| `--always-recreate-deps`       | `bool`        |          | Recreate dependent containers. Incompatible with --no-recreate.                                                                                     |\n| `--attach`                     | `stringArray` |          | Restrict attaching to the specified services. Incompatible with --attach-dependencies.                                                              |\n| `--attach-dependencies`        | `bool`        |          | Automatically attach to log output of dependent services                                                                                            |\n| `--build`                      | `bool`        |          | Build images before starting containers                                                                                                             |\n| `-d`, `--detach`               | `bool`        |          | Detached mode: Run containers in the background                                                                                                     |\n| `--dry-run`                    | `bool`        |          | Execute command in dry run mode                                                                                                                     |\n| `--exit-code-from`             | `string`      |          | Return the exit code of the selected service container. 
Implies --abort-on-container-exit                                                           |\n| `--force-recreate`             | `bool`        |          | Recreate containers even if their configuration and image haven't changed                                                                           |\n| `--menu`                       | `bool`        |          | Enable interactive shortcuts when running attached. Incompatible with --detach. Can also be enable/disable by setting COMPOSE_MENU environment var. |\n| `--no-attach`                  | `stringArray` |          | Do not attach (stream logs) to the specified services                                                                                               |\n| `--no-build`                   | `bool`        |          | Don't build an image, even if it's policy                                                                                                           |\n| `--no-color`                   | `bool`        |          | Produce monochrome output                                                                                                                           |\n| `--no-deps`                    | `bool`        |          | Don't start linked services                                                                                                                         |\n| `--no-log-prefix`              | `bool`        |          | Don't print prefix in logs                                                                                                                          |\n| `--no-recreate`                | `bool`        |          | If containers already exist, don't recreate them. Incompatible with --force-recreate.                                                               
|\n| `--no-start`                   | `bool`        |          | Don't start the services after creating them                                                                                                        |\n| `--pull`                       | `string`      | `policy` | Pull image before running (\"always\"\\|\"missing\"\\|\"never\")                                                                                            |\n| `--quiet-build`                | `bool`        |          | Suppress the build output                                                                                                                           |\n| `--quiet-pull`                 | `bool`        |          | Pull without printing progress information                                                                                                          |\n| `--remove-orphans`             | `bool`        |          | Remove containers for services not defined in the Compose file                                                                                      |\n| `-V`, `--renew-anon-volumes`   | `bool`        |          | Recreate anonymous volumes instead of retrieving data from the previous containers                                                                  |\n| `--scale`                      | `stringArray` |          | Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present.                                                       
|\n| `-t`, `--timeout`              | `int`         | `0`      | Use this timeout in seconds for container shutdown when attached or when containers are already running                                             |\n| `--timestamps`                 | `bool`        |          | Show timestamps                                                                                                                                     |\n| `--wait`                       | `bool`        |          | Wait for services to be running\\|healthy. Implies detached mode.                                                                                    |\n| `--wait-timeout`               | `int`         | `0`      | Maximum duration in seconds to wait for the project to be running\\|healthy                                                                          |\n| `-w`, `--watch`                | `bool`        |          | Watch source code and rebuild/refresh containers when files are updated.                                                                            |\n| `-y`, `--yes`                  | `bool`        |          | Assume \"yes\" as answer to all prompts and run non-interactively                                                                                     |\n\n\n<!---MARKER_GEN_END-->\n\n## Description\n\nBuilds, (re)creates, starts, and attaches to containers for a service.\n\nUnless they are already running, this command also starts any linked services.\n\nThe `docker compose up` command aggregates the output of each container (like `docker compose logs --follow` does).\nOne can optionally select a subset of services to attach to using `--attach` flag, or exclude some services using \n`--no-attach` to prevent output to be flooded by some verbose services. \n\nWhen the command exits, all containers are stopped. 
Running `docker compose up --detach` starts the containers in the\nbackground and leaves them running.\n\nIf there are existing containers for a service, and the service’s configuration or image was changed after the\ncontainer’s creation, `docker compose up` picks up the changes by stopping and recreating the containers\n(preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag.\n\nIf you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag.\n\nIf the process encounters an error, the exit code for this command is `1`.\nIf the process is interrupted using `SIGINT` (ctrl + C) or `SIGTERM`, the containers are stopped, and the exit code is `0`.\n"
  },
  {
    "path": "docs/reference/compose_version.md",
    "content": "# docker compose version\n\n<!---MARKER_GEN_START-->\nShow the Docker Compose version information\n\n### Options\n\n| Name             | Type     | Default | Description                                                    |\n|:-----------------|:---------|:--------|:---------------------------------------------------------------|\n| `--dry-run`      | `bool`   |         | Execute command in dry run mode                                |\n| `-f`, `--format` | `string` |         | Format the output. Values: [pretty \\| json]. (Default: pretty) |\n| `--short`        | `bool`   |         | Shows only Compose's version number                            |\n\n\n<!---MARKER_GEN_END-->\n"
  },
  {
    "path": "docs/reference/compose_volumes.md",
    "content": "# docker compose volumes\n\n<!---MARKER_GEN_START-->\nList volumes\n\n### Options\n\n| Name            | Type     | Default | Description                                                                                                                                                                                                                                                                                                                                                                                                                          |\n|:----------------|:---------|:--------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `--dry-run`     | `bool`   |         | Execute command in dry run mode                                                                                                                                                                                                                                                                                                                                                                                                      |\n| `--format`      | `string` | `table` | Format output using a custom template:<br>'table':            Print output in table format with column headers (default)<br>'table TEMPLATE':   Print output in table format using the given Go template<br>'json':             Print in JSON format<br>'TEMPLATE':         Print output using the given Go template.<br>Refer to https://docs.docker.com/go/formatting/ for more information about formatting output with templates |\n| `-q`, `--quiet` | `bool`   |         
| Only display volume names                                                                                                                                                                                                                                                                                                                                                                                                            |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_wait.md",
    "content": "# docker compose wait\n\n<!---MARKER_GEN_START-->\nBlock until containers of all (or specified) services stop.\n\n### Options\n\n| Name             | Type   | Default | Description                                  |\n|:-----------------|:-------|:--------|:---------------------------------------------|\n| `--down-project` | `bool` |         | Drops project when the first container stops |\n| `--dry-run`      | `bool` |         | Execute command in dry run mode              |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/compose_watch.md",
    "content": "# docker compose watch\n\n<!---MARKER_GEN_START-->\nWatch build context for service and rebuild/refresh containers when files are updated\n\n### Options\n\n| Name        | Type   | Default | Description                                   |\n|:------------|:-------|:--------|:----------------------------------------------|\n| `--dry-run` | `bool` |         | Execute command in dry run mode               |\n| `--no-up`   | `bool` |         | Do not build & start services before watching |\n| `--prune`   | `bool` | `true`  | Prune dangling images on rebuild              |\n| `--quiet`   | `bool` |         | hide build output                             |\n\n\n<!---MARKER_GEN_END-->\n\n"
  },
  {
    "path": "docs/reference/docker_compose.yaml",
    "content": "command: docker compose\nshort: Docker Compose\nlong: Define and run multi-container applications with Docker\nusage: docker compose\npname: docker\nplink: docker.yaml\ncname:\n    - docker compose attach\n    - docker compose bridge\n    - docker compose build\n    - docker compose commit\n    - docker compose config\n    - docker compose cp\n    - docker compose create\n    - docker compose down\n    - docker compose events\n    - docker compose exec\n    - docker compose export\n    - docker compose images\n    - docker compose kill\n    - docker compose logs\n    - docker compose ls\n    - docker compose pause\n    - docker compose port\n    - docker compose ps\n    - docker compose publish\n    - docker compose pull\n    - docker compose push\n    - docker compose restart\n    - docker compose rm\n    - docker compose run\n    - docker compose scale\n    - docker compose start\n    - docker compose stats\n    - docker compose stop\n    - docker compose top\n    - docker compose unpause\n    - docker compose up\n    - docker compose version\n    - docker compose volumes\n    - docker compose wait\n    - docker compose watch\nclink:\n    - docker_compose_attach.yaml\n    - docker_compose_bridge.yaml\n    - docker_compose_build.yaml\n    - docker_compose_commit.yaml\n    - docker_compose_config.yaml\n    - docker_compose_cp.yaml\n    - docker_compose_create.yaml\n    - docker_compose_down.yaml\n    - docker_compose_events.yaml\n    - docker_compose_exec.yaml\n    - docker_compose_export.yaml\n    - docker_compose_images.yaml\n    - docker_compose_kill.yaml\n    - docker_compose_logs.yaml\n    - docker_compose_ls.yaml\n    - docker_compose_pause.yaml\n    - docker_compose_port.yaml\n    - docker_compose_ps.yaml\n    - docker_compose_publish.yaml\n    - docker_compose_pull.yaml\n    - docker_compose_push.yaml\n    - docker_compose_restart.yaml\n    - docker_compose_rm.yaml\n    - docker_compose_run.yaml\n    - docker_compose_scale.yaml\n    - 
docker_compose_start.yaml\n    - docker_compose_stats.yaml\n    - docker_compose_stop.yaml\n    - docker_compose_top.yaml\n    - docker_compose_unpause.yaml\n    - docker_compose_up.yaml\n    - docker_compose_version.yaml\n    - docker_compose_volumes.yaml\n    - docker_compose_wait.yaml\n    - docker_compose_watch.yaml\noptions:\n    - option: all-resources\n      value_type: bool\n      default_value: \"false\"\n      description: Include all resources, even those not used by services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: ansi\n      value_type: string\n      default_value: auto\n      description: |\n        Control when to print ANSI control characters (\"never\"|\"always\"|\"auto\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: compatibility\n      value_type: bool\n      default_value: \"false\"\n      description: Run compose in backward compatibility mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: env-file\n      value_type: stringArray\n      default_value: '[]'\n      description: Specify an alternate environment file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: file\n      shorthand: f\n      value_type: stringArray\n      default_value: '[]'\n      description: Compose configuration files\n      deprecated: 
false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: insecure-registry\n      value_type: stringArray\n      default_value: '[]'\n      description: |\n        Use insecure registry to pull Compose OCI artifacts. Doesn't apply to images\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-ansi\n      value_type: bool\n      default_value: \"false\"\n      description: Do not print ANSI control characters (DEPRECATED)\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: parallel\n      value_type: int\n      default_value: \"-1\"\n      description: Control max parallelism, -1 for unlimited\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: profile\n      value_type: stringArray\n      default_value: '[]'\n      description: Specify a profile to enable\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: progress\n      value_type: string\n      description: Set type of progress output (auto, tty, plain, json, quiet)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: project-directory\n      value_type: string\n      description: |-\n        Specify an alternate working directory\n        (default: the path of the first specified Compose file)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: project-name\n      
shorthand: p\n      value_type: string\n      description: Project name\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: verbose\n      value_type: bool\n      default_value: \"false\"\n      description: Show more output\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: version\n      shorthand: v\n      value_type: bool\n      default_value: \"false\"\n      description: Show the Docker Compose version information\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: workdir\n      value_type: string\n      description: |-\n        DEPRECATED! USE --project-directory INSTEAD.\n        Specify an alternate working directory\n        (default: the path of the first specified Compose file)\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\nexamples: |-\n    ### Use `-f` to specify the name and path of one or more Compose files\n    Use the `-f` flag to specify the location of a Compose [configuration file](/reference/compose-file/).\n\n    #### Specifying multiple Compose files\n    You can supply multiple `-f` configuration files. When you supply multiple files, Compose combines them into a single\n    configuration. Compose builds the configuration in the order you supply the files. 
Subsequent files override and add\n    to their predecessors.\n\n    For example, consider this command line:\n\n    ```console\n    $ docker compose -f compose.yaml -f compose.admin.yaml run backup_db\n    ```\n\n    The `compose.yaml` file might specify a `webapp` service.\n\n    ```yaml\n    services:\n      webapp:\n        image: examples/web\n        ports:\n          - \"8000:8000\"\n        volumes:\n          - \"/data\"\n    ```\n    If the `compose.admin.yaml` also specifies this same service, any matching fields override the previous file.\n    New values add to the `webapp` service configuration.\n\n    ```yaml\n    services:\n      webapp:\n        build: .\n        environment:\n          - DEBUG=1\n    ```\n\n    When you use multiple Compose files, all paths in the files are relative to the first configuration file specified\n    with `-f`. You can use the `--project-directory` option to override this base path.\n\n    Use a `-f` with `-` (dash) as the filename to read the configuration from stdin. When stdin is used, all paths in the\n    configuration are relative to the current working directory.\n\n    The `-f` flag is optional. If you don’t provide this flag on the command line, Compose traverses the working directory\n    and its parent directories looking for a `compose.yaml` or `docker-compose.yaml` file.\n\n    #### Specifying a path to a single Compose file\n    You can use the `-f` flag to specify a path to a Compose file that is not located in the current directory, either\n    from the command line or by setting up a `COMPOSE_FILE` environment variable in your shell or in an environment file.\n\n    For an example of using the `-f` option at the command line, suppose you are running the Compose Rails sample, and\n    have a `compose.yaml` file in a directory called `sandbox/rails`. 
You can use a command like `docker compose pull` to\n    get the postgres image for the db service from anywhere by using the `-f` flag as follows:\n\n    ```console\n    $ docker compose -f ~/sandbox/rails/compose.yaml pull db\n    ```\n\n    #### Using an OCI published artifact\n    You can use the `-f` flag with the `oci://` prefix to reference a Compose file that has been published to an OCI registry.\n    This allows you to distribute and version your Compose configurations as OCI artifacts.\n\n    To use a Compose file from an OCI registry:\n\n    ```console\n    $ docker compose -f oci://registry.example.com/my-compose-project:latest up\n    ```\n\n    You can also combine OCI artifacts with local files:\n\n    ```console\n    $ docker compose -f oci://registry.example.com/my-compose-project:v1.0 -f compose.override.yaml up\n    ```\n\n    The OCI artifact must contain a valid Compose file. You can publish Compose files to an OCI registry using the\n    `docker compose publish` command.\n\n    #### Using a git repository\n    You can use the `-f` flag to reference a Compose file from a git repository. Compose supports various git URL formats:\n\n    Using HTTPS:\n    ```console\n    $ docker compose -f https://github.com/user/repo.git up\n    ```\n\n    Using SSH:\n    ```console\n    $ docker compose -f git@github.com:user/repo.git up\n    ```\n\n    You can specify a specific branch, tag, or commit:\n    ```console\n    $ docker compose -f https://github.com/user/repo.git@main up\n    $ docker compose -f https://github.com/user/repo.git@v1.0.0 up\n    $ docker compose -f https://github.com/user/repo.git@abc123 up\n    ```\n\n    You can also specify a subdirectory within the repository:\n    ```console\n    $ docker compose -f https://github.com/user/repo.git#main:path/to/compose.yaml up\n    ```\n\n    When using git resources, Compose will clone the repository and use the specified Compose file. 
You can combine\n    git resources with local files:\n\n    ```console\n    $ docker compose -f https://github.com/user/repo.git -f compose.override.yaml up\n    ```\n\n    ### Use `-p` to specify a project name\n\n    Each configuration has a project name. Compose sets the project name using\n    the following mechanisms, in order of precedence:\n    - The `-p` command line flag\n    - The `COMPOSE_PROJECT_NAME` environment variable\n    - The top level `name:` variable from the config file (or the last `name:`\n    from a series of config files specified using `-f`)\n    - The `basename` of the project directory containing the config file (or\n    containing the first config file specified using `-f`)\n    - The `basename` of the current directory if no config file is specified\n\n    Project names must contain only lowercase letters, decimal digits, dashes,\n    and underscores, and must begin with a lowercase letter or decimal digit. If\n    the `basename` of the project directory or current directory violates this\n    constraint, you must use one of the other mechanisms.\n\n    ```console\n    $ docker compose -p my_project ps -a\n    NAME                 SERVICE    STATUS     PORTS\n    my_project_demo_1    demo       running\n\n    $ docker compose -p my_project logs\n    demo_1  | PING localhost (127.0.0.1): 56 data bytes\n    demo_1  | 64 bytes from 127.0.0.1: seq=0 ttl=64 time=0.095 ms\n    ```\n\n    ### Use profiles to enable optional services\n\n    Use `--profile` to specify one or more active profiles.\n    Calling `docker compose --profile frontend up` starts the services with the profile `frontend` and services\n    without any specified profiles.\n    You can also enable multiple profiles, e.g. 
with `docker compose --profile frontend --profile debug up` the profiles `frontend` and `debug` are enabled.\n\n    Profiles can also be set by the `COMPOSE_PROFILES` environment variable.\n\n    ### Configuring parallelism\n\n    Use `--parallel` to specify the maximum level of parallelism for concurrent engine calls.\n    Calling `docker compose --parallel 1 pull` pulls the pullable images defined in the Compose file\n    one at a time. This can also be used to control build concurrency.\n\n    Parallelism can also be set by the `COMPOSE_PARALLEL_LIMIT` environment variable.\n\n    ### Set up environment variables\n\n    You can set environment variables for various docker compose options, including the `-f`, `-p` and `--profile` flags.\n\n    Setting the `COMPOSE_FILE` environment variable is equivalent to passing the `-f` flag,\n    the `COMPOSE_PROJECT_NAME` environment variable does the same as the `-p` flag,\n    the `COMPOSE_PROFILES` environment variable is equivalent to the `--profile` flag,\n    and `COMPOSE_PARALLEL_LIMIT` does the same as the `--parallel` flag.\n\n    If flags are explicitly set on the command line, the associated environment variable is ignored.\n\n    Setting the `COMPOSE_IGNORE_ORPHANS` environment variable to `true` stops docker compose from detecting orphaned\n    containers for the project.\n\n    Setting the `COMPOSE_MENU` environment variable to `false` disables the helper menu when running `docker compose up`\n    in attached mode. 
Alternatively, you can also run `docker compose up --menu=false` to disable the helper menu.\n\n    ### Use Dry Run mode to test your command\n\n    Use `--dry-run` flag to test a command without changing your application stack state.\n    Dry Run mode shows you all the steps Compose applies when executing a command, for example:\n    ```console\n    $ docker compose --dry-run up --build -d\n    [+] Pulling 1/1\n     ✔ DRY-RUN MODE -  db Pulled                                                                                                                                                                                                               0.9s\n    [+] Running 10/8\n     ✔ DRY-RUN MODE -    build service backend                                                                                                                                                                                                 0.0s\n     ✔ DRY-RUN MODE -  ==> ==> writing image dryRun-754a08ddf8bcb1cf22f310f09206dd783d42f7dd                                                                                                                                                   0.0s\n     ✔ DRY-RUN MODE -  ==> ==> naming to nginx-golang-mysql-backend                                                                                                                                                                            0.0s\n     ✔ DRY-RUN MODE -  Network nginx-golang-mysql_default                                    Created                                                                                                                                           0.0s\n     ✔ DRY-RUN MODE -  Container nginx-golang-mysql-db-1                                     Created                                                                                                                                           0.0s\n     ✔ DRY-RUN MODE -  Container nginx-golang-mysql-backend-1                               
 Created                                                                                                                                           0.0s\n     ✔ DRY-RUN MODE -  Container nginx-golang-mysql-proxy-1                                  Created                                                                                                                                           0.0s\n     ✔ DRY-RUN MODE -  Container nginx-golang-mysql-db-1                                     Healthy                                                                                                                                           0.5s\n     ✔ DRY-RUN MODE -  Container nginx-golang-mysql-backend-1                                Started                                                                                                                                           0.0s\n     ✔ DRY-RUN MODE -  Container nginx-golang-mysql-proxy-1                                  Started                                                                                                                                           0.0s\n    ```\n    From the example above, you can see that the first step is to pull the image defined by the `db` service, then build the `backend` service.\n    Next, the containers are created. The `db` service is started, and the `backend` and `proxy` services wait until the `db` service is healthy before starting.\n\n    Dry Run mode works with almost all commands. You cannot use Dry Run mode with commands that don't change the state of a Compose stack, such as `ps`, `ls`, or `logs`.\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha.yaml",
    "content": "command: docker compose alpha\nshort: Experimental commands\nlong: Experimental commands\npname: docker compose\nplink: docker_compose.yaml\ncname:\n    - docker compose alpha generate\n    - docker compose alpha publish\n    - docker compose alpha viz\nclink:\n    - docker_compose_alpha_generate.yaml\n    - docker_compose_alpha_publish.yaml\n    - docker_compose_alpha_viz.yaml\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: true\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha_dry-run.yaml",
    "content": "command: docker compose alpha dry-run\nshort: |\n    EXPERIMENTAL - Dry run command allows you to test a command without applying changes\nlong: |\n    EXPERIMENTAL - Dry run command allows you to test a command without applying changes\nusage: docker compose alpha dry-run -- [COMMAND...]\npname: docker compose alpha\nplink: docker_compose_alpha.yaml\ndeprecated: false\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha_generate.yaml",
    "content": "command: docker compose alpha generate\nshort: EXPERIMENTAL - Generate a Compose file from existing containers\nlong: EXPERIMENTAL - Generate a Compose file from existing containers\nusage: docker compose alpha generate [OPTIONS] [CONTAINERS...]\npname: docker compose alpha\nplink: docker_compose_alpha.yaml\noptions:\n    - option: format\n      value_type: string\n      default_value: yaml\n      description: 'Format the output. Values: [yaml | json]'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: name\n      value_type: string\n      description: Project name to set in the Compose file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: project-dir\n      value_type: string\n      description: Directory to use for the project\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: true\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha_publish.yaml",
    "content": "command: docker compose alpha publish\nshort: Publish compose application\nlong: Publish compose application\nusage: docker compose alpha publish [OPTIONS] REPOSITORY[:TAG]\npname: docker compose alpha\nplink: docker_compose_alpha.yaml\noptions:\n    - option: app\n      value_type: bool\n      default_value: \"false\"\n      description: Published compose application (includes referenced images)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: insecure-registry\n      value_type: bool\n      default_value: \"false\"\n      description: Use insecure registry\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: oci-version\n      value_type: string\n      description: |\n        OCI image/artifact specification version (automatically determined by default)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: resolve-image-digests\n      value_type: bool\n      default_value: \"false\"\n      description: Pin image tags to digests\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: with-env\n      value_type: bool\n      default_value: \"false\"\n      description: Include environment variables in the published OCI artifact\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: \"yes\"\n      shorthand: \"y\"\n      value_type: bool\n      default_value: \"false\"\n      description: Assume \"yes\" as answer to all prompts\n      deprecated: false\n      hidden: false\n      experimental: false\n      
experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: true\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha_scale.yaml",
    "content": "command: docker compose alpha scale\nshort: Scale services\nlong: Scale services\nusage: docker compose alpha scale [SERVICE=REPLICAS...]\npname: docker compose alpha\nplink: docker_compose_alpha.yaml\noptions:\n    - option: no-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Don't start linked services.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha_viz.yaml",
    "content": "command: docker compose alpha viz\nshort: EXPERIMENTAL - Generate a graphviz graph from your compose file\nlong: EXPERIMENTAL - Generate a graphviz graph from your compose file\nusage: docker compose alpha viz [OPTIONS]\npname: docker compose alpha\nplink: docker_compose_alpha.yaml\noptions:\n    - option: image\n      value_type: bool\n      default_value: \"false\"\n      description: Include service's image name in output graph\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: indentation-size\n      value_type: int\n      default_value: \"1\"\n      description: Number of tabs or spaces to use for indentation\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: networks\n      value_type: bool\n      default_value: \"false\"\n      description: Include service's attached networks in output graph\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: ports\n      value_type: bool\n      default_value: \"false\"\n      description: Include service's exposed ports in output graph\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: spaces\n      value_type: bool\n      default_value: \"false\"\n      description: |-\n        If given, space character ' ' will be used to indent,\n        otherwise tab character '\\t' will be used\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry 
run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: true\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_alpha_watch.yaml",
    "content": "command: docker compose alpha watch\nshort: |\n    Watch build context for service and rebuild/refresh containers when files are updated\nlong: |\n    Watch build context for service and rebuild/refresh containers when files are updated\nusage: docker compose alpha watch [SERVICE...]\npname: docker compose alpha\nplink: docker_compose_alpha.yaml\noptions:\n    - option: no-up\n      value_type: bool\n      default_value: \"false\"\n      description: Do not build & start services before watching\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      value_type: bool\n      default_value: \"false\"\n      description: hide build output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: true\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_attach.yaml",
    "content": "command: docker compose attach\nshort: |\n    Attach local standard input, output, and error streams to a service's running container\nlong: |\n    Attach local standard input, output, and error streams to a service's running container\nusage: docker compose attach [OPTIONS] SERVICE\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: detach-keys\n      value_type: string\n      description: Override the key sequence for detaching from a container.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: index of the container if service has multiple replicas.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-stdin\n      value_type: bool\n      default_value: \"false\"\n      description: Do not attach STDIN\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: sig-proxy\n      value_type: bool\n      default_value: \"true\"\n      description: Proxy all received signals to the process\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_bridge.yaml",
    "content": "command: docker compose bridge\nshort: Convert compose files into another model\nlong: Convert compose files into another model\npname: docker compose\nplink: docker_compose.yaml\ncname:\n    - docker compose bridge convert\n    - docker compose bridge transformations\nclink:\n    - docker_compose_bridge_convert.yaml\n    - docker_compose_bridge_transformations.yaml\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_bridge_convert.yaml",
    "content": "command: docker compose bridge convert\nshort: |\n    Convert compose files to Kubernetes manifests, Helm charts, or another model\nlong: |\n    Convert compose files to Kubernetes manifests, Helm charts, or another model\nusage: docker compose bridge convert\npname: docker compose bridge\nplink: docker_compose_bridge.yaml\noptions:\n    - option: output\n      shorthand: o\n      value_type: string\n      default_value: out\n      description: The output directory for the Kubernetes resources\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: templates\n      value_type: string\n      description: Directory containing transformation templates\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: transformation\n      shorthand: t\n      value_type: stringArray\n      default_value: '[]'\n      description: |\n        Transformation to apply to compose model (default: docker/compose-bridge-kubernetes)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_bridge_transformations.yaml",
    "content": "command: docker compose bridge transformations\nshort: Manage transformation images\nlong: Manage transformation images\npname: docker compose bridge\nplink: docker_compose_bridge.yaml\ncname:\n    - docker compose bridge transformations create\n    - docker compose bridge transformations list\nclink:\n    - docker_compose_bridge_transformations_create.yaml\n    - docker_compose_bridge_transformations_list.yaml\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_bridge_transformations_create.yaml",
    "content": "command: docker compose bridge transformations create\nshort: Create a new transformation\nlong: Create a new transformation\nusage: docker compose bridge transformations create [OPTION] PATH\npname: docker compose bridge transformations\nplink: docker_compose_bridge_transformations.yaml\noptions:\n    - option: from\n      shorthand: f\n      value_type: string\n      description: |\n        Existing transformation to copy (default: docker/compose-bridge-kubernetes)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_bridge_transformations_list.yaml",
    "content": "command: docker compose bridge transformations list\naliases: docker compose bridge transformations list, docker compose bridge transformations ls\nshort: List available transformations\nlong: List available transformations\nusage: docker compose bridge transformations list\npname: docker compose bridge transformations\nplink: docker_compose_bridge_transformations.yaml\noptions:\n    - option: format\n      value_type: string\n      default_value: table\n      description: 'Format the output. Values: [table | json]'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only display transformer names\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_build.yaml",
    "content": "command: docker compose build\nshort: Build or rebuild services\nlong: |-\n    Services are built once and then tagged, by default as `project-service`.\n\n    If the Compose file specifies an\n    [image](https://github.com/compose-spec/compose-spec/blob/main/spec.md#image) name,\n    the image is tagged with that name, substituting any variables beforehand. See\n    [variable interpolation](https://github.com/compose-spec/compose-spec/blob/main/spec.md#interpolation).\n\n    If you change a service's `Dockerfile` or the contents of its build directory,\n    run `docker compose build` to rebuild it.\nusage: docker compose build [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: build-arg\n      value_type: stringArray\n      default_value: '[]'\n      description: Set build-time variables for services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: builder\n      value_type: string\n      description: Set builder to use\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: check\n      value_type: bool\n      default_value: \"false\"\n      description: Check build configuration\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: compress\n      value_type: bool\n      default_value: \"true\"\n      description: Compress the build context using gzip. DEPRECATED\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: force-rm\n      value_type: bool\n      default_value: \"true\"\n      description: Always remove intermediate containers. 
DEPRECATED\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: memory\n      shorthand: m\n      value_type: bytes\n      default_value: \"0\"\n      description: |\n        Set memory limit for the build container. Not supported by BuildKit.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-cache\n      value_type: bool\n      default_value: \"false\"\n      description: Do not use cache when building the image\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-rm\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Do not remove intermediate containers after a successful build. DEPRECATED\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: parallel\n      value_type: bool\n      default_value: \"true\"\n      description: Build images in parallel. 
DEPRECATED\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: print\n      value_type: bool\n      default_value: \"false\"\n      description: Print equivalent bake file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: progress\n      value_type: string\n      description: Set type of ui output (auto, tty, plain, json, quiet)\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: provenance\n      value_type: string\n      description: Add a provenance attestation\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: pull\n      value_type: bool\n      default_value: \"false\"\n      description: Always attempt to pull a newer version of the image\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: push\n      value_type: bool\n      default_value: \"false\"\n      description: Push service images\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Suppress the build output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: sbom\n      value_type: string\n      description: Add a SBOM attestation\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      
kubernetes: false\n      swarm: false\n    - option: ssh\n      value_type: string\n      description: |\n        Set SSH authentications used when building service images. (use 'default' for using your default SSH Agent)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: with-dependencies\n      value_type: bool\n      default_value: \"false\"\n      description: Also build dependencies (transitively)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_commit.yaml",
    "content": "command: docker compose commit\nshort: Create a new image from a service container's changes\nlong: Create a new image from a service container's changes\nusage: docker compose commit [OPTIONS] SERVICE [REPOSITORY[:TAG]]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: author\n      shorthand: a\n      value_type: string\n      description: Author (e.g., \"John Hannibal Smith <hannibal@a-team.com>\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: change\n      shorthand: c\n      value_type: list\n      description: Apply Dockerfile instruction to the created image\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: index of the container if service has multiple replicas.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: message\n      shorthand: m\n      value_type: string\n      description: Commit message\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: pause\n      shorthand: p\n      value_type: bool\n      default_value: \"true\"\n      description: Pause container during commit\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: 
false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_config.yaml",
    "content": "command: docker compose config\nshort: Parse, resolve and render compose file in canonical format\nlong: |-\n    `docker compose config` renders the actual data model to be applied on the Docker Engine.\n    It merges the Compose files set by `-f` flags, resolves variables in the Compose file, and expands short-notation into\n    the canonical format.\nusage: docker compose config [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: environment\n      value_type: bool\n      default_value: \"false\"\n      description: Print environment used for interpolation.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: format\n      value_type: string\n      description: 'Format the output. Values: [yaml | json]'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: hash\n      value_type: string\n      description: Print the service config hash, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: images\n      value_type: bool\n      default_value: \"false\"\n      description: Print the image names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: lock-image-digests\n      value_type: bool\n      default_value: \"false\"\n      description: Produces an override file with image digests\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: models\n      value_type: bool\n      default_value: \"false\"\n      description: Print the model 
names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: networks\n      value_type: bool\n      default_value: \"false\"\n      description: Print the network names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-consistency\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Don't check model consistency - warning: may produce invalid Compose output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-env-resolution\n      value_type: bool\n      default_value: \"false\"\n      description: Don't resolve service env files\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-interpolate\n      value_type: bool\n      default_value: \"false\"\n      description: Don't interpolate environment variables\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-normalize\n      value_type: bool\n      default_value: \"false\"\n      description: Don't normalize compose model\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-path-resolution\n      value_type: bool\n      default_value: \"false\"\n      description: Don't resolve file paths\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: output\n      shorthand: o\n      
value_type: string\n      description: Save to file (default to stdout)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: profiles\n      value_type: bool\n      default_value: \"false\"\n      description: Print the profile names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only validate the configuration, don't print anything\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: resolve-image-digests\n      value_type: bool\n      default_value: \"false\"\n      description: Pin image tags to digests\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: services\n      value_type: bool\n      default_value: \"false\"\n      description: Print the service names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: variables\n      value_type: bool\n      default_value: \"false\"\n      description: Print model variables and default values.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: volumes\n      value_type: bool\n      default_value: \"false\"\n      description: Print the volume names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n   
 - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_convert.yaml",
    "content": "command: docker compose convert\naliases: docker compose convert, docker compose config\nshort: Converts the compose file to platform's canonical format\nlong: |-\n    `docker compose convert` renders the actual data model to be applied on the target platform. When used with the Docker engine,\n    it merges the Compose files set by `-f` flags, resolves variables in the Compose file, and expands short-notation into\n    the canonical format.\n\n    To allow smooth migration from docker-compose, this subcommand declares alias `docker compose config`\nusage: docker compose convert [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: format\n      value_type: string\n      default_value: yaml\n      description: 'Format the output. Values: [yaml | json]'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: hash\n      value_type: string\n      description: Print the service config hash, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: images\n      value_type: bool\n      default_value: \"false\"\n      description: Print the image names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-consistency\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Don't check model consistency - warning: may produce invalid Compose output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-interpolate\n      value_type: bool\n      default_value: \"false\"\n      description: Don't interpolate environment 
variables.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-normalize\n      value_type: bool\n      default_value: \"false\"\n      description: Don't normalize compose model.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: output\n      shorthand: o\n      value_type: string\n      description: Save to file (default to stdout)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: profiles\n      value_type: bool\n      default_value: \"false\"\n      description: Print the profile names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only validate the configuration, don't print anything.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: resolve-image-digests\n      value_type: bool\n      default_value: \"false\"\n      description: Pin image tags to digests.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: services\n      value_type: bool\n      default_value: \"false\"\n      description: Print the service names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: volumes\n      value_type: bool\n      default_value: \"false\"\n      description: Print 
the volume names, one per line.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_cp.yaml",
    "content": "command: docker compose cp\nshort: Copy files/folders between a service container and the local filesystem\nlong: Copy files/folders between a service container and the local filesystem\nusage: |-\n    docker compose cp [OPTIONS] SERVICE:SRC_PATH DEST_PATH|-\n    \tdocker compose cp [OPTIONS] SRC_PATH|- SERVICE:DEST_PATH\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: all\n      value_type: bool\n      default_value: \"false\"\n      description: Include containers created by the run command\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: archive\n      shorthand: a\n      value_type: bool\n      default_value: \"false\"\n      description: Archive mode (copy all uid/gid information)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: follow-link\n      shorthand: L\n      value_type: bool\n      default_value: \"false\"\n      description: Always follow symbol link in SRC_PATH\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: Index of the container if service has multiple replicas\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: 
false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_create.yaml",
    "content": "command: docker compose create\nshort: Creates containers for a service\nlong: Creates containers for a service\nusage: docker compose create [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: build\n      value_type: bool\n      default_value: \"false\"\n      description: Build images before starting containers\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: force-recreate\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Recreate containers even if their configuration and image haven't changed\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-build\n      value_type: bool\n      default_value: \"false\"\n      description: Don't build an image, even if it's policy\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-recreate\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        If containers already exist, don't recreate them. 
Incompatible with --force-recreate.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: pull\n      value_type: string\n      default_value: policy\n      description: Pull image before running (\"always\"|\"missing\"|\"never\"|\"build\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet-pull\n      value_type: bool\n      default_value: \"false\"\n      description: Pull without printing progress information\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: remove-orphans\n      value_type: bool\n      default_value: \"false\"\n      description: Remove containers for services not defined in the Compose file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: scale\n      value_type: stringArray\n      default_value: '[]'\n      description: |\n        Scale SERVICE to NUM instances. 
Overrides the `scale` setting in the Compose file if present.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: \"yes\"\n      shorthand: \"y\"\n      value_type: bool\n      default_value: \"false\"\n      description: Assume \"yes\" as answer to all prompts and run non-interactively\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_down.yaml",
    "content": "command: docker compose down\nshort: Stop and remove containers, networks\nlong: |-\n    Stops containers and removes containers, networks, volumes, and images created by `up`.\n\n    By default, the only things removed are:\n\n    - Containers for services defined in the Compose file.\n    - Networks defined in the networks section of the Compose file.\n    - The default network, if one is used.\n\n    Networks and volumes defined as external are never removed.\n\n    Anonymous volumes are not removed by default. However, as they don’t have a stable name, they are not automatically\n    mounted by a subsequent `up`. For data that needs to persist between updates, use explicit paths as bind mounts or\n    named volumes.\nusage: docker compose down [OPTIONS] [SERVICES]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: remove-orphans\n      value_type: bool\n      default_value: \"false\"\n      description: Remove containers for services not defined in the Compose file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: rmi\n      value_type: string\n      description: |\n        Remove images used by services. 
\"local\" remove only images that don't have a custom tag (\"local\"|\"all\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: timeout\n      shorthand: t\n      value_type: int\n      default_value: \"0\"\n      description: Specify a shutdown timeout in seconds\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: volumes\n      shorthand: v\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Remove named volumes declared in the \"volumes\" section of the Compose file and anonymous volumes attached to containers\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_events.yaml",
    "content": "command: docker compose events\nshort: Receive real time events from containers\nlong: |-\n    Stream container events for every container in the project.\n\n    With the `--json` flag, a json object is printed one per line with the format:\n\n    ```json\n    {\n        \"time\": \"2015-11-20T18:01:03.615550\",\n        \"type\": \"container\",\n        \"action\": \"create\",\n        \"id\": \"213cf7...5fc39a\",\n        \"service\": \"web\",\n        \"attributes\": {\n          \"name\": \"application_web_1\",\n          \"image\": \"alpine:edge\"\n        }\n    }\n    ```\n\n    The events that can be received using this can be seen [here](/reference/cli/docker/system/events/#object-types).\nusage: docker compose events [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: json\n      value_type: bool\n      default_value: \"false\"\n      description: Output events as a stream of json objects\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: since\n      value_type: string\n      description: Show all events created since timestamp\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: until\n      value_type: string\n      description: Stream events until this timestamp\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: 
false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_exec.yaml",
    "content": "command: docker compose exec\nshort: Execute a command in a running container\nlong: |-\n    This is the equivalent of `docker exec` targeting a Compose service.\n\n    With this subcommand, you can run arbitrary commands in your services. Commands allocate a TTY by default, so\n    you can use a command such as `docker compose exec web sh` to get an interactive prompt.\n\n    By default, Compose will enter container in interactive mode and allocate a TTY, while the equivalent `docker exec`\n    command requires passing `--interactive --tty` flags to get the same behavior. Compose also support those two flags\n    to offer a smooth migration between commands, whenever they are no-op by default. Still, `interactive` can be used to\n    force disabling interactive mode (`--interactive=false`), typically when `docker compose exec` command is used inside\n    a script.\nusage: docker compose exec [OPTIONS] SERVICE COMMAND [ARGS...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: detach\n      shorthand: d\n      value_type: bool\n      default_value: \"false\"\n      description: 'Detached mode: Run command in the background'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: env\n      shorthand: e\n      value_type: stringArray\n      default_value: '[]'\n      description: Set environment variables\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: Index of the container if service has multiple replicas\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: interactive\n      shorthand: i\n      value_type: bool\n      
default_value: \"true\"\n      description: Keep STDIN open even if not attached\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-tty\n      shorthand: T\n      value_type: bool\n      default_value: \"true\"\n      description: |\n        Disable pseudo-TTY allocation. By default 'docker compose exec' allocates a TTY.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: privileged\n      value_type: bool\n      default_value: \"false\"\n      description: Give extended privileges to the process\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: tty\n      shorthand: t\n      value_type: bool\n      default_value: \"true\"\n      description: Allocate a pseudo-TTY\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: user\n      shorthand: u\n      value_type: string\n      description: Run the command as this user\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: workdir\n      shorthand: w\n      value_type: string\n      description: Path to workdir directory for this command\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: 
false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_export.yaml",
    "content": "command: docker compose export\nshort: Export a service container's filesystem as a tar archive\nlong: Export a service container's filesystem as a tar archive\nusage: docker compose export [OPTIONS] SERVICE\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: index of the container if service has multiple replicas.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: output\n      shorthand: o\n      value_type: string\n      description: Write to a file, instead of STDOUT\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_images.yaml",
    "content": "command: docker compose images\nshort: List images used by the created containers\nlong: List images used by the created containers\nusage: docker compose images [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: format\n      value_type: string\n      default_value: table\n      description: 'Format the output. Values: [table | json]'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only display IDs\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_kill.yaml",
    "content": "command: docker compose kill\nshort: Force stop service containers\nlong: |-\n    Forces running containers to stop by sending a `SIGKILL` signal. Optionally the signal can be passed, for example:\n\n    ```console\n    $ docker compose kill -s SIGINT\n    ```\nusage: docker compose kill [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: remove-orphans\n      value_type: bool\n      default_value: \"false\"\n      description: Remove containers for services not defined in the Compose file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: signal\n      shorthand: s\n      value_type: string\n      default_value: SIGKILL\n      description: SIGNAL to send to the container\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_logs.yaml",
    "content": "command: docker compose logs\nshort: View output from containers\nlong: Displays log output from services\nusage: docker compose logs [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: follow\n      shorthand: f\n      value_type: bool\n      default_value: \"false\"\n      description: Follow log output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: index of the container if service has multiple replicas\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-color\n      value_type: bool\n      default_value: \"false\"\n      description: Produce monochrome output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-log-prefix\n      value_type: bool\n      default_value: \"false\"\n      description: Don't print prefix in logs\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: since\n      value_type: string\n      description: |\n        Show logs since timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 
42m for 42 minutes)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: tail\n      shorthand: \"n\"\n      value_type: string\n      default_value: all\n      description: |\n        Number of lines to show from the end of the logs for each container\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: timestamps\n      shorthand: t\n      value_type: bool\n      default_value: \"false\"\n      description: Show timestamps\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: until\n      value_type: string\n      description: |\n        Show logs before a timestamp (e.g. 2013-01-02T13:23:37Z) or relative (e.g. 42m for 42 minutes)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_ls.yaml",
    "content": "command: docker compose ls\nshort: List running compose projects\nlong: Lists running Compose projects\nusage: docker compose ls [OPTIONS]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: all\n      shorthand: a\n      value_type: bool\n      default_value: \"false\"\n      description: Show all stopped Compose projects\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: filter\n      value_type: filter\n      description: Filter output based on conditions provided\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: format\n      value_type: string\n      default_value: table\n      description: 'Format the output. Values: [table | json]'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only display project names\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_pause.yaml",
    "content": "command: docker compose pause\nshort: Pause services\nlong: |\n    Pauses running containers of a service. They can be unpaused with `docker compose unpause`.\nusage: docker compose pause [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_port.yaml",
    "content": "command: docker compose port\nshort: Print the public port for a port binding\nlong: Prints the public port for a port binding\nusage: docker compose port [OPTIONS] SERVICE PRIVATE_PORT\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: index\n      value_type: int\n      default_value: \"0\"\n      description: Index of the container if service has multiple replicas\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: protocol\n      value_type: string\n      default_value: tcp\n      description: tcp or udp\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_ps.yaml",
    "content": "command: docker compose ps\nshort: List containers\nlong: |-\n    Lists containers for a Compose project, with current status and exposed ports.\n\n    ```console\n    $ docker compose ps\n    NAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\n    example-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n    ```\n\n    By default, only running containers are shown. `--all` flag can be used to include stopped containers.\n\n    ```console\n    $ docker compose ps --all\n    NAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\n    example-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n    example-bar-1   alpine    \"/entrypoint.…\"   bar        4 seconds ago   exited (0)\n    ```\nusage: docker compose ps [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: all\n      shorthand: a\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Show all stopped containers (including those created by the run command)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: filter\n      value_type: string\n      description: 'Filter services by a property (supported filters: status)'\n      details_url: '#filter'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: format\n      value_type: string\n      default_value: table\n      description: |-\n        Format output using a custom template:\n        'table':            Print output in table format with column headers (default)\n        'table TEMPLATE':   Print output in table format using the given Go template\n        'json':    
         Print in JSON format\n        'TEMPLATE':         Print output using the given Go template.\n        Refer to https://docs.docker.com/go/formatting/ for more information about formatting output with templates\n      details_url: '#format'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-trunc\n      value_type: bool\n      default_value: \"false\"\n      description: Don't truncate output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: orphans\n      value_type: bool\n      default_value: \"true\"\n      description: Include orphaned services (not declared by project)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only display IDs\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: services\n      value_type: bool\n      default_value: \"false\"\n      description: Display services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: status\n      value_type: stringArray\n      default_value: '[]'\n      description: |\n        Filter services by status. 
Values: [paused | restarting | removing | running | dead | created | exited]\n      details_url: '#status'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\nexamples: |-\n    ### Format the output (--format) {#format}\n\n    By default, the `docker compose ps` command uses a table (\"pretty\") format to\n    show the containers. The `--format` flag allows you to specify alternative\n    presentations for the output. Currently, supported options are `pretty` (default),\n    and `json`, which outputs information about the containers as a JSON array:\n\n    ```console\n    $ docker compose ps --format json\n    [{\"ID\":\"1553b0236cf4d2715845f053a4ee97042c4f9a2ef655731ee34f1f7940eaa41a\",\"Name\":\"example-bar-1\",\"Command\":\"/docker-entrypoint.sh nginx -g 'daemon off;'\",\"Project\":\"example\",\"Service\":\"bar\",\"State\":\"exited\",\"Health\":\"\",\"ExitCode\":0,\"Publishers\":null},{\"ID\":\"f02a4efaabb67416e1ff127d51c4b5578634a0ad5743bd65225ff7d1909a3fa0\",\"Name\":\"example-foo-1\",\"Command\":\"/docker-entrypoint.sh nginx -g 'daemon off;'\",\"Project\":\"example\",\"Service\":\"foo\",\"State\":\"running\",\"Health\":\"\",\"ExitCode\":0,\"Publishers\":[{\"URL\":\"0.0.0.0\",\"TargetPort\":80,\"PublishedPort\":8080,\"Protocol\":\"tcp\"}]}]\n    ```\n\n    The JSON output allows you to use the information in other tools for further\n    processing, for example, using the [`jq` utility](https://stedolan.github.io/jq/)\n    to pretty-print the JSON:\n\n    ```console\n    $ docker compose ps --format json | jq .\n    [\n      {\n        \"ID\": 
\"1553b0236cf4d2715845f053a4ee97042c4f9a2ef655731ee34f1f7940eaa41a\",\n        \"Name\": \"example-bar-1\",\n        \"Command\": \"/docker-entrypoint.sh nginx -g 'daemon off;'\",\n        \"Project\": \"example\",\n        \"Service\": \"bar\",\n        \"State\": \"exited\",\n        \"Health\": \"\",\n        \"ExitCode\": 0,\n        \"Publishers\": null\n      },\n      {\n        \"ID\": \"f02a4efaabb67416e1ff127d51c4b5578634a0ad5743bd65225ff7d1909a3fa0\",\n        \"Name\": \"example-foo-1\",\n        \"Command\": \"/docker-entrypoint.sh nginx -g 'daemon off;'\",\n        \"Project\": \"example\",\n        \"Service\": \"foo\",\n        \"State\": \"running\",\n        \"Health\": \"\",\n        \"ExitCode\": 0,\n        \"Publishers\": [\n          {\n            \"URL\": \"0.0.0.0\",\n            \"TargetPort\": 80,\n            \"PublishedPort\": 8080,\n            \"Protocol\": \"tcp\"\n          }\n        ]\n      }\n    ]\n    ```\n\n    ### Filter containers by status (--status) {#status}\n\n    Use the `--status` flag to filter the list of containers by status. For example,\n    to show only containers that are running or only containers that have exited:\n\n    ```console\n    $ docker compose ps --status=running\n    NAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\n    example-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n\n    $ docker compose ps --status=exited\n    NAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\n    example-bar-1   alpine    \"/entrypoint.…\"   bar        4 seconds ago   exited (0)\n    ```\n\n    ### Filter containers by status (--filter) {#filter}\n\n    The [`--status` flag](#status) is a convenient shorthand for the `--filter status=<status>`\n    flag. 
The example below is equivalent to the example from the previous section,\n    this time using the `--filter` flag:\n\n    ```console\n    $ docker compose ps --filter status=running\n    NAME            IMAGE     COMMAND           SERVICE    CREATED         STATUS          PORTS\n    example-foo-1   alpine    \"/entrypoint.…\"   foo        4 seconds ago   Up 2 seconds    0.0.0.0:8080->80/tcp\n    ```\n\n    The `docker compose ps` command currently only supports the `--filter status=<status>`\n    option, but additional filter options may be added in the future.\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_publish.yaml",
    "content": "command: docker compose publish\nshort: Publish compose application\nlong: Publish compose application\nusage: docker compose publish [OPTIONS] REPOSITORY[:TAG]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: app\n      value_type: bool\n      default_value: \"false\"\n      description: Published compose application (includes referenced images)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: insecure-registry\n      value_type: bool\n      default_value: \"false\"\n      description: Use insecure registry\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: oci-version\n      value_type: string\n      description: |\n        OCI image/artifact specification version (automatically determined by default)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: resolve-image-digests\n      value_type: bool\n      default_value: \"false\"\n      description: Pin image tags to digests\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: with-env\n      value_type: bool\n      default_value: \"false\"\n      description: Include environment variables in the published OCI artifact\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: \"yes\"\n      shorthand: \"y\"\n      value_type: bool\n      default_value: \"false\"\n      description: Assume \"yes\" as answer to all prompts\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      
kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_pull.yaml",
    "content": "command: docker compose pull\nshort: Pull service images\nlong: |\n    Pulls an image associated with a service defined in a `compose.yaml` file, but does not start containers based on those images\nusage: docker compose pull [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: ignore-buildable\n      value_type: bool\n      default_value: \"false\"\n      description: Ignore images that can be built\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: ignore-pull-failures\n      value_type: bool\n      default_value: \"false\"\n      description: Pull what it can and ignores images with pull failures\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: include-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Also pull services declared as dependencies\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-parallel\n      value_type: bool\n      default_value: \"true\"\n      description: DEPRECATED disable parallel pulling\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: parallel\n      value_type: bool\n      default_value: \"true\"\n      description: DEPRECATED pull multiple images in parallel\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: policy\n      value_type: string\n      description: Apply pull policy (\"missing\"|\"always\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      
experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Pull without printing progress information\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\nexamples: |-\n    Consider the following `compose.yaml`:\n\n    ```yaml\n    services:\n      db:\n        image: postgres\n      web:\n        build: .\n        command: bundle exec rails s -p 3000 -b '0.0.0.0'\n        volumes:\n          - .:/myapp\n        ports:\n          - \"3000:3000\"\n        depends_on:\n          - db\n    ```\n\n    If you run `docker compose pull ServiceName` in the same directory as the `compose.yaml` file that defines the service,\n    Docker pulls the associated image. 
For example, to call the postgres image configured as the db service in our example,\n    you would run `docker compose pull db`.\n\n    ```console\n    $ docker compose pull db\n    [+] Running 1/15\n     ⠸ db Pulling                                                             12.4s\n       ⠿ 45b42c59be33 Already exists                                           0.0s\n       ⠹ 40adec129f1a Downloading  3.374MB/4.178MB                             9.3s\n       ⠹ b4c431d00c78 Download complete                                        9.3s\n       ⠹ 2696974e2815 Download complete                                        9.3s\n       ⠹ 564b77596399 Downloading  5.622MB/7.965MB                             9.3s\n       ⠹ 5044045cf6f2 Downloading  216.7kB/391.1kB                             9.3s\n       ⠹ d736e67e6ac3 Waiting                                                  9.3s\n       ⠹ 390c1c9a5ae4 Waiting                                                  9.3s\n       ⠹ c0e62f172284 Waiting                                                  9.3s\n       ⠹ ebcdc659c5bf Waiting                                                  9.3s\n       ⠹ 29be22cb3acc Waiting                                                  9.3s\n       ⠹ f63c47038e66 Waiting                                                  9.3s\n       ⠹ 77a0c198cde5 Waiting                                                  9.3s\n       ⠹ c8752d5b785c Waiting                                                  9.3s\n    ```\n\n    `docker compose pull` tries to pull image for services with a build section. If pull fails, it lets you know this service image must be built. You can skip this by setting `--ignore-buildable` flag.\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_push.yaml",
    "content": "command: docker compose push\nshort: Push service images\nlong: |-\n    Pushes images for services to their respective registry/repository.\n\n    The following assumptions are made:\n    - You are pushing an image you have built locally\n    - You have access to the build key\n\n    Examples\n\n    ```yaml\n    services:\n      service1:\n        build: .\n        image: localhost:5000/yourimage  ## goes to local registry\n\n      service2:\n        build: .\n        image: your-dockerid/yourimage  ## goes to your repository on Docker Hub\n    ```\nusage: docker compose push [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: ignore-push-failures\n      value_type: bool\n      default_value: \"false\"\n      description: Push what it can and ignores images with push failures\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: include-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Also push images of services declared as dependencies\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Push without printing progress information\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: 
false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_restart.yaml",
    "content": "command: docker compose restart\nshort: Restart service containers\nlong: |-\n    Restarts all stopped and running services, or the specified services only.\n\n    If you make changes to your `compose.yml` configuration, these changes are not reflected\n    after running this command. For example, changes to environment variables (which are added\n    after a container is built, but before the container's command is executed) are not updated\n    after restarting.\n\n    If you are looking to configure a service's restart policy, refer to\n    [restart](https://github.com/compose-spec/compose-spec/blob/main/spec.md#restart)\n    or [restart_policy](https://github.com/compose-spec/compose-spec/blob/main/deploy.md#restart_policy).\nusage: docker compose restart [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: no-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Don't restart dependent services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: timeout\n      shorthand: t\n      value_type: int\n      default_value: \"0\"\n      description: Specify a shutdown timeout in seconds\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_rm.yaml",
    "content": "command: docker compose rm\nshort: Removes stopped service containers\nlong: |-\n    Removes stopped service containers.\n\n    By default, anonymous volumes attached to containers are not removed. You can override this with `-v`. To list all\n    volumes, use `docker volume ls`.\n\n    Any data which is not in a volume is lost.\n\n    Running the command with no options also removes one-off containers created by `docker compose run`:\n\n    ```console\n    $ docker compose rm\n    Going to remove djangoquickstart_web_run_1\n    Are you sure? [yN] y\n    Removing djangoquickstart_web_run_1 ... done\n    ```\nusage: docker compose rm [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: all\n      shorthand: a\n      value_type: bool\n      default_value: \"false\"\n      description: Deprecated - no effect\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: force\n      shorthand: f\n      value_type: bool\n      default_value: \"false\"\n      description: Don't ask to confirm removal\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: stop\n      shorthand: s\n      value_type: bool\n      default_value: \"false\"\n      description: Stop the containers, if required, before removing\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: volumes\n      shorthand: v\n      value_type: bool\n      default_value: \"false\"\n      description: Remove any anonymous volumes attached to containers\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n   
   value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_run.yaml",
    "content": "command: docker compose run\nshort: Run a one-off command on a service\nlong: |-\n    Runs a one-time command against a service.\n\n    The following command starts the `web` service and runs `bash` as its command:\n\n    ```console\n    $ docker compose run web bash\n    ```\n\n    Commands you use with run start in new containers with configuration defined by that of the service,\n    including volumes, links, and other details. However, there are two important differences:\n\n    First, the command passed by `run` overrides the command defined in the service configuration. For example, if the\n    `web` service configuration is started with `bash`, then `docker compose run web python app.py` overrides it with\n    `python app.py`.\n\n    The second difference is that the `docker compose run` command does not create any of the ports specified in the\n    service configuration. This prevents port collisions with already-open ports. If you do want the service’s ports\n    to be created and mapped to the host, specify the `--service-ports`\n\n    ```console\n    $ docker compose run --service-ports web python manage.py shell\n    ```\n\n    Alternatively, manual port mapping can be specified with the `--publish` or `-p` options, just as when using docker run:\n\n    ```console\n    $ docker compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python manage.py shell\n    ```\n\n    If you start a service configured with links, the run command first checks to see if the linked service is running\n    and starts the service if it is stopped. Once all the linked services are running, the run executes the command you\n    passed it. 
For example, you could run:\n\n    ```console\n    $ docker compose run db psql -h db -U docker\n    ```\n\n    This opens an interactive PostgreSQL shell for the linked `db` container.\n\n    If you do not want the run command to start linked containers, use the `--no-deps` flag:\n\n    ```console\n    $ docker compose run --no-deps web python manage.py shell\n    ```\n\n    If you want to remove the container after running while overriding the container’s restart policy, use the `--rm` flag:\n\n    ```console\n    $ docker compose run --rm web python manage.py db upgrade\n    ```\n\n    This runs a database upgrade script, and removes the container when finished running, even if a restart policy is\n    specified in the service configuration.\nusage: docker compose run [OPTIONS] SERVICE [COMMAND] [ARGS...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: build\n      value_type: bool\n      default_value: \"false\"\n      description: Build image before starting container\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: cap-add\n      value_type: list\n      description: Add Linux capabilities\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: cap-drop\n      value_type: list\n      description: Drop Linux capabilities\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: detach\n      shorthand: d\n      value_type: bool\n      default_value: \"false\"\n      description: Run container in background and print container ID\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: entrypoint\n      
value_type: string\n      description: Override the entrypoint of the image\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: env\n      shorthand: e\n      value_type: stringArray\n      default_value: '[]'\n      description: Set environment variables\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: env-from-file\n      value_type: stringArray\n      default_value: '[]'\n      description: Set environment variables from file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: interactive\n      shorthand: i\n      value_type: bool\n      default_value: \"true\"\n      description: Keep STDIN open even if not attached\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: label\n      shorthand: l\n      value_type: stringArray\n      default_value: '[]'\n      description: Add or override a label\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: name\n      value_type: string\n      description: Assign a name to the container\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-TTY\n      shorthand: T\n      value_type: bool\n      default_value: \"true\"\n      description: 'Disable pseudo-TTY allocation (default: auto-detected)'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-deps\n     
 value_type: bool\n      default_value: \"false\"\n      description: Don't start linked services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: publish\n      shorthand: p\n      value_type: stringArray\n      default_value: '[]'\n      description: Publish a container's port(s) to the host\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: pull\n      value_type: string\n      default_value: policy\n      description: Pull image before running (\"always\"|\"missing\"|\"never\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Don't print anything to STDOUT\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet-build\n      value_type: bool\n      default_value: \"false\"\n      description: Suppress progress output from the build process\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet-pull\n      value_type: bool\n      default_value: \"false\"\n      description: Pull without printing progress information\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: remove-orphans\n      value_type: bool\n      default_value: \"false\"\n      description: Remove containers for services not defined in the Compose file\n      deprecated: false\n      hidden: false\n      experimental: false\n      
experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: rm\n      value_type: bool\n      default_value: \"false\"\n      description: Automatically remove the container when it exits\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: service-ports\n      shorthand: P\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Run command with all service's ports enabled and mapped to the host\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: tty\n      shorthand: t\n      value_type: bool\n      default_value: \"true\"\n      description: Allocate a pseudo-TTY\n      deprecated: false\n      hidden: true\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: use-aliases\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Use the service's network useAliases in the network(s) the container connects to\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: user\n      shorthand: u\n      value_type: string\n      description: Run as specified username or uid\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: volume\n      shorthand: v\n      value_type: stringArray\n      default_value: '[]'\n      description: Bind mount a volume\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: workdir\n      shorthand: w\n      value_type: string\n      description: Working directory 
inside the container\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_scale.yaml",
    "content": "command: docker compose scale\nshort: Scale services\nlong: Scale services\nusage: docker compose scale [SERVICE=REPLICAS...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: no-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Don't start linked services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_start.yaml",
    "content": "command: docker compose start\nshort: Start services\nlong: Starts existing containers for a service\nusage: docker compose start [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: wait\n      value_type: bool\n      default_value: \"false\"\n      description: Wait for services to be running|healthy. Implies detached mode.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: wait-timeout\n      value_type: int\n      default_value: \"0\"\n      description: |\n        Maximum duration in seconds to wait for the project to be running|healthy\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_stats.yaml",
    "content": "command: docker compose stats\nshort: Display a live stream of container(s) resource usage statistics\nlong: Display a live stream of container(s) resource usage statistics\nusage: docker compose stats [OPTIONS] [SERVICE]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: all\n      shorthand: a\n      value_type: bool\n      default_value: \"false\"\n      description: Show all containers (default shows just running)\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: format\n      value_type: string\n      description: |-\n        Format output using a custom template:\n        'table':            Print output in table format with column headers (default)\n        'table TEMPLATE':   Print output in table format using the given Go template\n        'json':             Print in JSON format\n        'TEMPLATE':         Print output using the given Go template.\n        Refer to https://docs.docker.com/engine/cli/formatting/ for more information about formatting output with templates\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-stream\n      value_type: bool\n      default_value: \"false\"\n      description: Disable streaming stats and only pull the first result\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-trunc\n      value_type: bool\n      default_value: \"false\"\n      description: Do not truncate output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute 
command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_stop.yaml",
    "content": "command: docker compose stop\nshort: Stop services\nlong: |\n    Stops running containers without removing them. They can be started again with `docker compose start`.\nusage: docker compose stop [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: timeout\n      shorthand: t\n      value_type: int\n      default_value: \"0\"\n      description: Specify a shutdown timeout in seconds\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_top.yaml",
    "content": "command: docker compose top\nshort: Display the running processes\nlong: Displays the running processes\nusage: docker compose top [SERVICES...]\npname: docker compose\nplink: docker_compose.yaml\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\nexamples: |-\n    ```console\n    $ docker compose top\n    example_foo_1\n    UID    PID      PPID     C    STIME   TTY   TIME       CMD\n    root   142353   142331   2    15:33   ?     00:00:00   ping localhost -c 5\n    ```\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_unpause.yaml",
    "content": "command: docker compose unpause\nshort: Unpause services\nlong: Unpauses paused containers of a service\nusage: docker compose unpause [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_up.yaml",
    "content": "command: docker compose up\nshort: Create and start containers\nlong: |-\n    Builds, (re)creates, starts, and attaches to containers for a service.\n\n    Unless they are already running, this command also starts any linked services.\n\n    The `docker compose up` command aggregates the output of each container (like `docker compose logs --follow` does).\n    One can optionally select a subset of services to attach to using `--attach` flag, or exclude some services using\n    `--no-attach` to prevent output to be flooded by some verbose services.\n\n    When the command exits, all containers are stopped. Running `docker compose up --detach` starts the containers in the\n    background and leaves them running.\n\n    If there are existing containers for a service, and the service’s configuration or image was changed after the\n    container’s creation, `docker compose up` picks up the changes by stopping and recreating the containers\n    (preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag.\n\n    If you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag.\n\n    If the process encounters an error, the exit code for this command is `1`.\n    If the process is interrupted using `SIGINT` (ctrl + C) or `SIGTERM`, the containers are stopped, and the exit code is `0`.\nusage: docker compose up [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: abort-on-container-exit\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Stops all containers if any container was stopped. 
Incompatible with -d\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: abort-on-container-failure\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Stops all containers if any container exited with failure. Incompatible with -d\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: always-recreate-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Recreate dependent containers. Incompatible with --no-recreate.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: attach\n      value_type: stringArray\n      default_value: '[]'\n      description: |\n        Restrict attaching to the specified services. 
Incompatible with --attach-dependencies.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: attach-dependencies\n      value_type: bool\n      default_value: \"false\"\n      description: Automatically attach to log output of dependent services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: build\n      value_type: bool\n      default_value: \"false\"\n      description: Build images before starting containers\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: detach\n      shorthand: d\n      value_type: bool\n      default_value: \"false\"\n      description: 'Detached mode: Run containers in the background'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: exit-code-from\n      value_type: string\n      description: |\n        Return the exit code of the selected service container. Implies --abort-on-container-exit\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: force-recreate\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Recreate containers even if their configuration and image haven't changed\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: menu\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Enable interactive shortcuts when running attached. Incompatible with --detach. 
Can also be enable/disable by setting COMPOSE_MENU environment var.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-attach\n      value_type: stringArray\n      default_value: '[]'\n      description: Do not attach (stream logs) to the specified services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-build\n      value_type: bool\n      default_value: \"false\"\n      description: Don't build an image, even if it's policy\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-color\n      value_type: bool\n      default_value: \"false\"\n      description: Produce monochrome output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-deps\n      value_type: bool\n      default_value: \"false\"\n      description: Don't start linked services\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-log-prefix\n      value_type: bool\n      default_value: \"false\"\n      description: Don't print prefix in logs\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-recreate\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        If containers already exist, don't recreate them. 
Incompatible with --force-recreate.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: no-start\n      value_type: bool\n      default_value: \"false\"\n      description: Don't start the services after creating them\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: pull\n      value_type: string\n      default_value: policy\n      description: Pull image before running (\"always\"|\"missing\"|\"never\")\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet-build\n      value_type: bool\n      default_value: \"false\"\n      description: Suppress the build output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet-pull\n      value_type: bool\n      default_value: \"false\"\n      description: Pull without printing progress information\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: remove-orphans\n      value_type: bool\n      default_value: \"false\"\n      description: Remove containers for services not defined in the Compose file\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: renew-anon-volumes\n      shorthand: V\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Recreate anonymous volumes instead of retrieving data from the previous containers\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n    
  kubernetes: false\n      swarm: false\n    - option: scale\n      value_type: stringArray\n      default_value: '[]'\n      description: |\n        Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: timeout\n      shorthand: t\n      value_type: int\n      default_value: \"0\"\n      description: |\n        Use this timeout in seconds for container shutdown when attached or when containers are already running\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: timestamps\n      value_type: bool\n      default_value: \"false\"\n      description: Show timestamps\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: wait\n      value_type: bool\n      default_value: \"false\"\n      description: Wait for services to be running|healthy. 
Implies detached mode.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: wait-timeout\n      value_type: int\n      default_value: \"0\"\n      description: |\n        Maximum duration in seconds to wait for the project to be running|healthy\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: watch\n      shorthand: w\n      value_type: bool\n      default_value: \"false\"\n      description: |\n        Watch source code and rebuild/refresh containers when files are updated.\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: \"yes\"\n      shorthand: \"y\"\n      value_type: bool\n      default_value: \"false\"\n      description: Assume \"yes\" as answer to all prompts and run non-interactively\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_version.yaml",
    "content": "command: docker compose version\nshort: Show the Docker Compose version information\nlong: Show the Docker Compose version information\nusage: docker compose version [OPTIONS]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: format\n      shorthand: f\n      value_type: string\n      description: 'Format the output. Values: [pretty | json]. (Default: pretty)'\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: short\n      value_type: bool\n      default_value: \"false\"\n      description: Shows only Compose's version number\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_volumes.yaml",
    "content": "command: docker compose volumes\nshort: List volumes\nlong: List volumes\nusage: docker compose volumes [OPTIONS] [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: format\n      value_type: string\n      default_value: table\n      description: |-\n        Format output using a custom template:\n        'table':            Print output in table format with column headers (default)\n        'table TEMPLATE':   Print output in table format using the given Go template\n        'json':             Print in JSON format\n        'TEMPLATE':         Print output using the given Go template.\n        Refer to https://docs.docker.com/go/formatting/ for more information about formatting output with templates\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      shorthand: q\n      value_type: bool\n      default_value: \"false\"\n      description: Only display volume names\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_wait.yaml",
    "content": "command: docker compose wait\nshort: Block until containers of all (or specified) services stop.\nlong: Block until containers of all (or specified) services stop.\nusage: docker compose wait SERVICE [SERVICE...] [OPTIONS]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: down-project\n      value_type: bool\n      default_value: \"false\"\n      description: Drops project when the first container stops\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/reference/docker_compose_watch.yaml",
    "content": "command: docker compose watch\nshort: |\n    Watch build context for service and rebuild/refresh containers when files are updated\nlong: |\n    Watch build context for service and rebuild/refresh containers when files are updated\nusage: docker compose watch [SERVICE...]\npname: docker compose\nplink: docker_compose.yaml\noptions:\n    - option: no-up\n      value_type: bool\n      default_value: \"false\"\n      description: Do not build & start services before watching\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: prune\n      value_type: bool\n      default_value: \"true\"\n      description: Prune dangling images on rebuild\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\n    - option: quiet\n      value_type: bool\n      default_value: \"false\"\n      description: hide build output\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ninherited_options:\n    - option: dry-run\n      value_type: bool\n      default_value: \"false\"\n      description: Execute command in dry run mode\n      deprecated: false\n      hidden: false\n      experimental: false\n      experimentalcli: false\n      kubernetes: false\n      swarm: false\ndeprecated: false\nhidden: false\nexperimental: false\nexperimentalcli: false\nkubernetes: false\nswarm: false\n\n"
  },
  {
    "path": "docs/sdk.md",
    "content": "# Using the `docker/compose` SDK\n\nThe `docker/compose` package can be used as a Go library by third-party applications to programmatically manage\ncontainerized applications defined in Compose files. This SDK provides a comprehensive API that lets you\nintegrate Compose functionality directly into your applications, allowing you to load, validate, and manage\nmulti-container environments without relying on the Compose CLI. \n\nWhether you need to orchestrate containers as part of\na deployment pipeline, build custom management tools, or embed container orchestration into your application, the\nCompose SDK offers the same powerful capabilities that drive the Docker Compose command-line tool.\n\n## Set up the SDK\n\nTo get started, create an SDK instance using the `NewComposeService()` function, which initializes a service with the\nnecessary configuration to interact with the Docker daemon and manage Compose projects. This service instance provides\nmethods for all core Compose operations including creating, starting, stopping, and removing containers, as well as\nloading and validating Compose files. 
The service handles the underlying Docker API interactions and resource\nmanagement, allowing you to focus on your application logic.\n\n## Example usage\n\nHere's a basic example demonstrating how to load a Compose project and start the services:\n\n```go\npackage main\n\nimport (\n    \"context\"\n    \"log\"\n\n    \"github.com/docker/cli/cli/command\"\n    \"github.com/docker/cli/cli/flags\"\n    \"github.com/docker/compose/v5/pkg/api\"\n    \"github.com/docker/compose/v5/pkg/compose\"\n)\n\nfunc main() {\n    ctx := context.Background()\n\n    dockerCLI, err := command.NewDockerCli()\n    if err != nil {\n        log.Fatalf(\"Failed to create docker CLI: %v\", err)\n    }\n    err = dockerCLI.Initialize(&flags.ClientOptions{})\n    if err != nil {\n        log.Fatalf(\"Failed to initialize docker CLI: %v\", err)\n    }\n\n    // Create a new Compose service instance\n    service, err := compose.NewComposeService(dockerCLI)\n    if err != nil {\n        log.Fatalf(\"Failed to create compose service: %v\", err)\n    }\n\n    // Load the Compose project from a compose file\n    project, err := service.LoadProject(ctx, api.ProjectLoadOptions{\n        ConfigPaths: []string{\"compose.yaml\"},\n        ProjectName: \"my-app\",\n    })\n    if err != nil {\n        log.Fatalf(\"Failed to load project: %v\", err)\n    }\n\n    // Start the services defined in the Compose file\n    err = service.Up(ctx, project, api.UpOptions{\n        Create: api.CreateOptions{},\n        Start:  api.StartOptions{},\n    })\n    if err != nil {\n        log.Fatalf(\"Failed to start services: %v\", err)\n    }\n\n    log.Printf(\"Successfully started project: %s\", project.Name)\n}\n```\n\nThis example demonstrates the core workflow - creating a service instance, loading a project from a Compose file, and\nstarting the services. 
The SDK provides many additional operations for managing the lifecycle of your containerized\napplication.\n\n## Customizing the SDK\n\nThe `NewComposeService()` function accepts optional `compose.Option` parameters to customize the SDK behavior. These\noptions allow you to configure I/O streams, concurrency limits, dry-run mode, and other advanced features.\n\n```go\n    // Create a custom output buffer to capture logs\n    var outputBuffer bytes.Buffer\n\n    // Create a compose service with custom options\n    service, err := compose.NewComposeService(dockerCLI,\n        compose.WithOutputStream(&outputBuffer),          // Redirect output to custom writer\n        compose.WithErrorStream(os.Stderr),               // Use stderr for errors\n        compose.WithMaxConcurrency(4),                    // Limit concurrent operations\n        compose.WithPrompt(compose.AlwaysOkPrompt()),     // Auto-confirm all prompts\n    )\n```\n\n### Available options\n\n- `WithOutputStream(io.Writer)` - Redirect standard output to a custom writer\n- `WithErrorStream(io.Writer)` - Redirect error output to a custom writer\n- `WithInputStream(io.Reader)` - Provide a custom input stream for interactive prompts\n- `WithStreams(out, err, in)` - Set all I/O streams at once\n- `WithMaxConcurrency(int)` - Limit the number of concurrent operations against the Docker API\n- `WithPrompt(Prompt)` - Customize user confirmation behavior (use `AlwaysOkPrompt()` for non-interactive mode)\n- `WithDryRun` - Run operations in dry-run mode without actually applying changes\n- `WithContextInfo(api.ContextInfo)` - Set custom Docker context information\n- `WithProxyConfig(map[string]string)` - Configure HTTP proxy settings for builds\n- `WithEventProcessor(progress.EventProcessor)` - Receive progress events and operation notifications\n\nThese options provide fine-grained control over the SDK's behavior, making it suitable for various integration\nscenarios including CLI tools, web services, automation 
scripts, and testing environments.\n\n## Tracking operations with `EventProcessor`\n\nThe `EventProcessor` interface allows you to monitor Compose operations in real-time by receiving events about changes\napplied to Docker resources such as images, containers, volumes, and networks. This is particularly useful for building\nuser interfaces, logging systems, or monitoring tools that need to track the progress of Compose operations.\n\n### Understanding `EventProcessor`\n\nA Compose operation, such as `up`, `down`, `build`, performs a series of changes to Docker resources. The\n`EventProcessor` receives notifications about these changes through three key methods:\n\n- `Start(ctx, operation)` - Called when a Compose operation begins, for example `up`\n- `On(events...)` - Called with progress events for individual resource changes, for example, container starting, image\n  being pulled\n- `Done(operation, success)` - Called when the operation completes, indicating success or failure\n\nEach event contains information about the resource being modified, its current status, and progress indicators when\napplicable (such as download progress for image pulls).\n\n### Event status types\n\nEvents report resource changes with the following status types:\n\n- Working - Operation is in progress, for example, creating, starting, pulling\n- Done - Operation completed successfully\n- Warning - Operation completed with warnings\n- Error - Operation failed\n\nCommon status text values include: `Creating`, `Created`, `Starting`, `Started`, `Running`, `Stopping`, `Stopped`,\n`Removing`, `Removed`, `Building`, `Built`, `Pulling`, `Pulled`, and more.\n\n### Built-in `EventProcessor` implementations\n\nThe SDK provides three ready-to-use `EventProcessor` implementations:\n\n- `progress.NewTTYWriter(io.Writer)` - Renders an interactive terminal UI with progress bars and task lists\n  (similar to the Docker Compose CLI output)\n- `progress.NewPlainWriter(io.Writer)` - Outputs simple 
text-based progress messages suitable for non-interactive\n  environments or log files\n- `progress.NewJSONWriter()` - Render events as JSON objects\n- `progress.NewQuietWriter()` - (Default) Silently processes events without producing any output\n\nUsing `EventProcessor`, a custom UI can be plugged into `docker/compose`.\n"
  },
  {
    "path": "docs/yaml/main/generate.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\n\tclidocstool \"github.com/docker/cli-docs-tool\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/cmd/compose\"\n)\n\nfunc generateDocs(opts *options) error {\n\tdockerCLI, err := command.NewDockerCli()\n\tif err != nil {\n\t\treturn err\n\t}\n\tcmd := &cobra.Command{\n\t\tUse:               \"docker\",\n\t\tDisableAutoGenTag: true,\n\t}\n\tcmd.AddCommand(compose.RootCommand(dockerCLI, nil))\n\tdisableFlagsInUseLine(cmd)\n\n\ttool, err := clidocstool.New(clidocstool.Options{\n\t\tRoot:      cmd,\n\t\tSourceDir: opts.source,\n\t\tTargetDir: opts.target,\n\t\tPlugin:    true,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, format := range opts.formats {\n\t\tswitch format {\n\t\tcase \"yaml\":\n\t\t\tif err := tool.GenYamlTree(cmd); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase \"md\":\n\t\t\tif err := tool.GenMarkdownTree(cmd); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tdefault:\n\t\t\treturn fmt.Errorf(\"unknown format %q\", format)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc disableFlagsInUseLine(cmd *cobra.Command) {\n\tvisitAll(cmd, func(ccmd *cobra.Command) {\n\t\t// do not add a `[flags]` to the end of the usage line.\n\t\tccmd.DisableFlagsInUseLine = true\n\t})\n}\n\n// visitAll will traverse 
all commands from the root.\n// This is different from the VisitAll of cobra.Command where only parents\n// are checked.\nfunc visitAll(root *cobra.Command, fn func(*cobra.Command)) {\n\tfor _, cmd := range root.Commands() {\n\t\tvisitAll(cmd, fn)\n\t}\n\tfn(root)\n}\n\ntype options struct {\n\tsource  string\n\ttarget  string\n\tformats []string\n}\n\nfunc main() {\n\tcwd, _ := os.Getwd()\n\topts := &options{\n\t\tsource:  filepath.Join(cwd, \"docs\", \"reference\"),\n\t\ttarget:  filepath.Join(cwd, \"docs\", \"reference\"),\n\t\tformats: []string{\"yaml\", \"md\"},\n\t}\n\tfmt.Printf(\"Project root: %s\\n\", opts.source)\n\tfmt.Printf(\"Generating yaml files into %s\\n\", opts.target)\n\tif err := generateDocs(opts); err != nil {\n\t\t_, _ = fmt.Fprintf(os.Stderr, \"Failed to generate documentation: %s\\n\", err.Error())\n\t}\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/docker/compose/v5\n\ngo 1.25.0\n\nrequire (\n\tgithub.com/AlecAivazis/survey/v2 v2.3.7\n\tgithub.com/DefangLabs/secret-detector v0.0.0-20250403165618-22662109213e\n\tgithub.com/Microsoft/go-winio v0.6.2\n\tgithub.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d\n\tgithub.com/buger/goterm v1.0.4\n\tgithub.com/compose-spec/compose-go/v2 v2.10.1\n\tgithub.com/containerd/console v1.0.5\n\tgithub.com/containerd/containerd/v2 v2.2.2\n\tgithub.com/containerd/errdefs v1.0.0\n\tgithub.com/containerd/platforms v1.0.0-rc.2\n\tgithub.com/distribution/reference v0.6.0\n\tgithub.com/docker/buildx v0.31.1\n\tgithub.com/docker/cli v29.2.1+incompatible\n\tgithub.com/docker/cli-docs-tool v0.11.0\n\tgithub.com/docker/docker v28.5.2+incompatible\n\tgithub.com/docker/go-units v0.5.0\n\tgithub.com/eiannone/keyboard v0.0.0-20220611211555-0d226195f203\n\tgithub.com/fsnotify/fsevents v0.2.0\n\tgithub.com/go-viper/mapstructure/v2 v2.5.0\n\tgithub.com/google/go-cmp v0.7.0\n\tgithub.com/google/uuid v1.6.0\n\tgithub.com/hashicorp/go-version v1.8.0\n\tgithub.com/jonboulle/clockwork v0.5.0\n\tgithub.com/mattn/go-shellwords v1.0.12\n\tgithub.com/mitchellh/go-ps v1.0.0\n\tgithub.com/moby/buildkit v0.27.1\n\tgithub.com/moby/go-archive v0.2.0\n\tgithub.com/moby/moby/api v1.54.0\n\tgithub.com/moby/moby/client v0.3.0\n\tgithub.com/moby/patternmatcher v0.6.0\n\tgithub.com/moby/sys/atomicwriter v0.1.0\n\tgithub.com/morikuni/aec v1.1.0\n\tgithub.com/opencontainers/go-digest v1.0.0\n\tgithub.com/opencontainers/image-spec v1.1.1\n\tgithub.com/otiai10/copy v1.14.1\n\tgithub.com/sirupsen/logrus v1.9.4\n\tgithub.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966\n\tgithub.com/spf13/cobra v1.10.2\n\tgithub.com/spf13/pflag v1.0.10\n\tgithub.com/stretchr/testify v1.11.1\n\tgithub.com/tilt-dev/fsnotify v1.4.8-0.20220602155310-fff9c274a375\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0\n\tgo.opentelemetry.io/otel 
v1.38.0\n\tgo.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0\n\tgo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0\n\tgo.opentelemetry.io/otel/metric v1.38.0\n\tgo.opentelemetry.io/otel/sdk v1.38.0\n\tgo.opentelemetry.io/otel/trace v1.38.0\n\tgo.uber.org/goleak v1.3.0\n\tgo.uber.org/mock v0.6.0\n\tgo.yaml.in/yaml/v4 v4.0.0-rc.4\n\tgolang.org/x/sync v0.20.0\n\tgolang.org/x/sys v0.42.0\n\tgoogle.golang.org/grpc v1.78.0\n\tgotest.tools/v3 v3.5.2\n\ttags.cncf.io/container-device-interface v1.1.0\n)\n\nrequire (\n\tgithub.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect\n\tgithub.com/cenkalti/backoff/v5 v5.0.3 // indirect\n\tgithub.com/containerd/containerd/api v1.10.0 // indirect\n\tgithub.com/containerd/continuity v0.4.5 // indirect\n\tgithub.com/containerd/errdefs/pkg v0.3.0 // indirect\n\tgithub.com/containerd/log v0.1.0 // indirect\n\tgithub.com/containerd/ttrpc v1.2.7 // indirect\n\tgithub.com/containerd/typeurl/v2 v2.2.3 // indirect\n\tgithub.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect\n\tgithub.com/davecgh/go-spew v1.1.1 // indirect\n\tgithub.com/docker/distribution v2.8.3+incompatible // indirect\n\tgithub.com/docker/docker-credential-helpers v0.9.5 // indirect\n\tgithub.com/docker/go-connections v0.6.0 // indirect\n\tgithub.com/felixge/httpsnoop v1.0.4 // indirect\n\tgithub.com/fvbommel/sortorder v1.1.0 // indirect\n\tgithub.com/go-logr/logr v1.4.3 // indirect\n\tgithub.com/go-logr/stdr v1.2.2 // indirect\n\tgithub.com/gofrs/flock v0.13.0 // indirect\n\tgithub.com/gogo/protobuf v1.3.2 // indirect\n\tgithub.com/golang-jwt/jwt/v5 v5.3.0 // indirect\n\tgithub.com/golang/protobuf v1.5.4 // indirect\n\tgithub.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect\n\tgithub.com/gorilla/mux v1.8.1 // indirect\n\tgithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 // indirect\n\tgithub.com/hashicorp/errwrap v1.1.0 // indirect\n\tgithub.com/hashicorp/go-multierror v1.1.1 // 
indirect\n\tgithub.com/in-toto/in-toto-golang v0.9.0 // indirect\n\tgithub.com/inconshreveable/mousetrap v1.1.0 // indirect\n\tgithub.com/inhies/go-bytesize v0.0.0-20220417184213-4913239db9cf // indirect\n\tgithub.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect\n\tgithub.com/klauspost/compress v1.18.3 // indirect\n\tgithub.com/mattn/go-colorable v0.1.14 // indirect\n\tgithub.com/mattn/go-isatty v0.0.20 // indirect\n\tgithub.com/mattn/go-runewidth v0.0.16 // indirect\n\tgithub.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect\n\tgithub.com/mitchellh/hashstructure/v2 v2.0.2 // indirect\n\tgithub.com/moby/docker-image-spec v1.3.1 // indirect\n\tgithub.com/moby/locker v1.0.1 // indirect\n\tgithub.com/moby/sys/capability v0.4.0 // indirect\n\tgithub.com/moby/sys/sequential v0.6.0 // indirect\n\tgithub.com/moby/sys/signal v0.7.1 // indirect\n\tgithub.com/moby/sys/symlink v0.3.0 // indirect\n\tgithub.com/moby/sys/user v0.4.0 // indirect\n\tgithub.com/moby/sys/userns v0.1.0 // indirect\n\tgithub.com/moby/term v0.5.2 // indirect\n\tgithub.com/otiai10/mint v1.6.3 // indirect\n\tgithub.com/pelletier/go-toml v1.9.5 // indirect\n\tgithub.com/pkg/errors v0.9.1 // indirect\n\tgithub.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.0 // indirect\n\tgithub.com/rivo/uniseg v0.4.7 // indirect\n\tgithub.com/russross/blackfriday/v2 v2.1.0 // indirect\n\tgithub.com/santhosh-tekuri/jsonschema/v6 v6.0.1 // indirect\n\tgithub.com/secure-systems-lab/go-securesystemslib v0.9.1 // indirect\n\tgithub.com/shibumi/go-pathspec v1.3.0 // indirect\n\tgithub.com/sigstore/sigstore v1.10.0 // indirect\n\tgithub.com/sigstore/sigstore-go v1.1.4-0.20251124094504-b5fe07a5a7d7 // indirect\n\tgithub.com/tonistiigi/dchapes-mode v0.0.0-20250318174251-73d941a28323 // indirect\n\tgithub.com/tonistiigi/fsutil v0.0.0-20251211185533-a2aa163d723f // indirect\n\tgithub.com/tonistiigi/go-csvvalue 
v0.0.0-20240814133006-030d3b2625d0 // indirect\n\tgithub.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea // indirect\n\tgithub.com/tonistiigi/vt100 v0.0.0-20240514184818-90bafcd6abab // indirect\n\tgithub.com/xhit/go-str2duration/v2 v2.1.0 // indirect\n\tgo.opentelemetry.io/auto/sdk v1.2.1 // indirect\n\tgo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0 // indirect\n\tgo.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 // indirect\n\tgo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect\n\tgo.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect\n\tgo.opentelemetry.io/proto/otlp v1.7.1 // indirect\n\tgo.yaml.in/yaml/v3 v3.0.4 // indirect\n\tgolang.org/x/crypto v0.46.0 // indirect\n\tgolang.org/x/net v0.48.0 // indirect\n\tgolang.org/x/term v0.38.0 // indirect\n\tgolang.org/x/text v0.32.0 // indirect\n\tgolang.org/x/time v0.14.0 // indirect\n\tgoogle.golang.org/genproto/googleapis/api v0.0.0-20251103181224-f26f9409b101 // indirect\n\tgoogle.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect\n\tgoogle.golang.org/protobuf v1.36.11 // indirect\n\tgopkg.in/ini.v1 v1.67.0 // indirect\n\tgopkg.in/yaml.v3 v3.0.1 // indirect\n)\n\nexclude (\n\t// FIXME(thaJeztah): remove this once kubernetes updated their dependencies to no longer need this.\n\t//\n\t// For additional details, see this PR and links mentioned in that PR:\n\t// https://github.com/kubernetes-sigs/kustomize/pull/5830#issuecomment-2569960859\n\tgithub.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc\n\tgithub.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2\n)\n"
  },
  {
    "path": "go.sum",
    "content": "cyphar.com/go-pathrs v0.2.1 h1:9nx1vOgwVvX1mNBWDu93+vaceedpbsDqo+XuBGL40b8=\ncyphar.com/go-pathrs v0.2.1/go.mod h1:y8f1EMG7r+hCuFf/rXsKqMJrJAUoADZGNh5/vZPKcGc=\ngithub.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=\ngithub.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=\ngithub.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkkhIiSjQ=\ngithub.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo=\ngithub.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=\ngithub.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=\ngithub.com/DefangLabs/secret-detector v0.0.0-20250403165618-22662109213e h1:rd4bOvKmDIx0WeTv9Qz+hghsgyjikFiPrseXHlKepO0=\ngithub.com/DefangLabs/secret-detector v0.0.0-20250403165618-22662109213e/go.mod h1:blbwPQh4DTlCZEfk1BLU4oMIhLda2U+A840Uag9DsZw=\ngithub.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=\ngithub.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=\ngithub.com/Microsoft/hcsshim v0.14.0-rc.1 h1:qAPXKwGOkVn8LlqgBN8GS0bxZ83hOJpcjxzmlQKxKsQ=\ngithub.com/Microsoft/hcsshim v0.14.0-rc.1/go.mod h1:hTKFGbnDtQb1wHiOWv4v0eN+7boSWAHyK/tNAaYZL0c=\ngithub.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=\ngithub.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=\ngithub.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d h1:licZJFw2RwpHMqeKTCYkitsPqHNxTmd4SNR5r94FGM8=\ngithub.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d/go.mod h1:asat636LX7Bqt5lYEZ27JNDcqxfjdBQuJ/MM4CN/Lzo=\ngithub.com/anchore/go-struct-converter v0.1.0 
h1:2rDRssAl6mgKBSLNiVCMADgZRhoqtw9dedlWa0OhD30=\ngithub.com/anchore/go-struct-converter v0.1.0/go.mod h1:rYqSE9HbjzpHTI74vwPvae4ZVYZd1lue2ta6xHPdblA=\ngithub.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so=\ngithub.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=\ngithub.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=\ngithub.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=\ngithub.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ=\ngithub.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=\ngithub.com/buger/goterm v1.0.4 h1:Z9YvGmOih81P0FbVtEYTFF6YsSgxSUKEhf/f9bTMXbY=\ngithub.com/buger/goterm v1.0.4/go.mod h1:HiFWV3xnkolgrBV3mY8m0X0Pumt4zg4QhbdOzQtB8tE=\ngithub.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=\ngithub.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=\ngithub.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=\ngithub.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=\ngithub.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb h1:EDmT6Q9Zs+SbUoc7Ik9EfrFqcylYqgPZ9ANSbTAntnE=\ngithub.com/codahale/rfc6979 v0.0.0-20141003034818-6a90f24967eb/go.mod h1:ZjrT6AXHbDs86ZSdt/osfBi5qfexBrKUdONk989Wnk4=\ngithub.com/compose-spec/compose-go/v2 v2.10.1 h1:mFbXobojGRFIVi1UknrvaDAZ+PkJfyjqkA1yseh+vAU=\ngithub.com/compose-spec/compose-go/v2 v2.10.1/go.mod h1:Ohac1SzhO/4fXXrzWIztIVB6ckmKBv1Nt5Z5mGVESUg=\ngithub.com/containerd/cgroups/v3 v3.1.2 h1:OSosXMtkhI6Qove637tg1XgK4q+DhR0mX8Wi8EhrHa4=\ngithub.com/containerd/cgroups/v3 v3.1.2/go.mod h1:PKZ2AcWmSBsY/tJUVhtS/rluX0b1uq1GmPO1ElCmbOw=\ngithub.com/containerd/console v1.0.5 
h1:R0ymNeydRqH2DmakFNdmjR2k0t7UPuiOV/N/27/qqsc=\ngithub.com/containerd/console v1.0.5/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=\ngithub.com/containerd/containerd/api v1.10.0 h1:5n0oHYVBwN4VhoX9fFykCV9dF1/BvAXeg2F8W6UYq1o=\ngithub.com/containerd/containerd/api v1.10.0/go.mod h1:NBm1OAk8ZL+LG8R0ceObGxT5hbUYj7CzTmR3xh0DlMM=\ngithub.com/containerd/containerd/v2 v2.2.2 h1:mjVQdtfryzT7lOqs5EYUFZm8ioPVjOpkSoG1GJPxEMY=\ngithub.com/containerd/containerd/v2 v2.2.2/go.mod h1:5Jhevmv6/2J+Iu/A2xXAdUIdI5Ah/hfyO7okJ4AFIdY=\ngithub.com/containerd/continuity v0.4.5 h1:ZRoN1sXq9u7V6QoHMcVWGhOwDFqZ4B9i5H6un1Wh0x4=\ngithub.com/containerd/continuity v0.4.5/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=\ngithub.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=\ngithub.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=\ngithub.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=\ngithub.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=\ngithub.com/containerd/fifo v1.1.0 h1:4I2mbh5stb1u6ycIABlBw9zgtlK8viPI9QkQNRQEEmY=\ngithub.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=\ngithub.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=\ngithub.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=\ngithub.com/containerd/nydus-snapshotter v0.15.10 h1:hphjuKOqSHLGznNJiAvmsOWkdu4qFXjf4DzGrWSuIsM=\ngithub.com/containerd/nydus-snapshotter v0.15.10/go.mod h1:EWRd/QJ0b6UKHAqYgiV5gHlqLC2qq5cQiSlXEdVovrA=\ngithub.com/containerd/platforms v1.0.0-rc.2 h1:0SPgaNZPVWGEi4grZdV8VRYQn78y+nm6acgLGv/QzE4=\ngithub.com/containerd/platforms v1.0.0-rc.2/go.mod h1:J71L7B+aiM5SdIEqmd9wp6THLVRzJGXfNuWCZCllLA4=\ngithub.com/containerd/plugin v1.0.0 h1:c8Kf1TNl6+e2TtMHZt+39yAPDbouRH9WAToRjex483Y=\ngithub.com/containerd/plugin v1.0.0/go.mod 
h1:hQfJe5nmWfImiqT1q8Si3jLv3ynMUIBB47bQ+KexvO8=\ngithub.com/containerd/stargz-snapshotter v0.17.0 h1:djNS4KU8ztFhLdEDZ1bsfzOiYuVHT6TgSU5qwRk+cNc=\ngithub.com/containerd/stargz-snapshotter/estargz v0.17.0 h1:+TyQIsR/zSFI1Rm31EQBwpAA1ovYgIKHy7kctL3sLcE=\ngithub.com/containerd/stargz-snapshotter/estargz v0.17.0/go.mod h1:s06tWAiJcXQo9/8AReBCIo/QxcXFZ2n4qfsRnpl71SM=\ngithub.com/containerd/ttrpc v1.2.7 h1:qIrroQvuOL9HQ1X6KHe2ohc7p+HP/0VE6XPU7elJRqQ=\ngithub.com/containerd/ttrpc v1.2.7/go.mod h1:YCXHsb32f+Sq5/72xHubdiJRQY9inL4a4ZQrAbN1q9o=\ngithub.com/containerd/typeurl/v2 v2.2.3 h1:yNA/94zxWdvYACdYO8zofhrTVuQY73fFU1y++dYSw40=\ngithub.com/containerd/typeurl/v2 v2.2.3/go.mod h1:95ljDnPfD3bAbDJRugOiShd/DlAAsxGtUBhJxIn7SCk=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=\ngithub.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=\ngithub.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=\ngithub.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=\ngithub.com/cyberphone/json-canonicalization v0.0.0-20241213102144-19d51d7fe467 h1:uX1JmpONuD549D73r6cgnxyUu18Zb7yHAy5AYU0Pm4Q=\ngithub.com/cyberphone/json-canonicalization v0.0.0-20241213102144-19d51d7fe467/go.mod h1:uzvlm1mxhHkdfqitSA92i7Se+S9ksOn3a3qmv/kyOCw=\ngithub.com/cyphar/filepath-securejoin v0.6.0 h1:BtGB77njd6SVO6VztOHfPxKitJvd/VPT+OFBFMOi1Is=\ngithub.com/cyphar/filepath-securejoin v0.6.0/go.mod h1:A8hd4EnAeyujCJRrICiOWqjS1AX0a9kM5XL+NwKoYSc=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=\ngithub.com/davecgh/go-spew v1.1.1/go.mod 
h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/digitorus/pkcs7 v0.0.0-20230818184609-3a137a874352 h1:ge14PCmCvPjpMQMIAH7uKg0lrtNSOdpYsRXlwk3QbaE=\ngithub.com/digitorus/pkcs7 v0.0.0-20230818184609-3a137a874352/go.mod h1:SKVExuS+vpu2l9IoOc0RwqE7NYnb0JlcFHFnEJkVDzc=\ngithub.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7 h1:lxmTCgmHE1GUYL7P0MlNa00M67axePTq+9nBSGddR8I=\ngithub.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7/go.mod h1:GvWntX9qiTlOud0WkQ6ewFm0LPy5JUR1Xo0Ngbd1w6Y=\ngithub.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=\ngithub.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=\ngithub.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=\ngithub.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=\ngithub.com/docker/buildx v0.31.1 h1:zbvbrb9nxBNVV8nnI33f2F+4aAZBA1gY+AmeBFflMqY=\ngithub.com/docker/buildx v0.31.1/go.mod h1:SD+jYLnt3S4SXqohVtV+8z+dihnOgwMJ8t+bLQvsaCk=\ngithub.com/docker/cli v29.2.1+incompatible h1:n3Jt0QVCN65eiVBoUTZQM9mcQICCJt3akW4pKAbKdJg=\ngithub.com/docker/cli v29.2.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=\ngithub.com/docker/cli-docs-tool v0.11.0 h1:7d8QARFb7QEobizqxmEM7fOteZEHwH/zWgHQtHZEcfE=\ngithub.com/docker/cli-docs-tool v0.11.0/go.mod h1:ma8BKiisUo8D6W05XEYIh3oa1UbgrZhi1nowyKFJa8Q=\ngithub.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=\ngithub.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=\ngithub.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM=\ngithub.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=\ngithub.com/docker/docker-credential-helpers v0.9.5 h1:EFNN8DHvaiK8zVqFA2DT6BjXE0GzfLOZ38ggPTKePkY=\ngithub.com/docker/docker-credential-helpers v0.9.5/go.mod 
h1:v1S+hepowrQXITkEfw6o4+BMbGot02wiKpzWhGUZK6c=\ngithub.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=\ngithub.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=\ngithub.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=\ngithub.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=\ngithub.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=\ngithub.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=\ngithub.com/eiannone/keyboard v0.0.0-20220611211555-0d226195f203 h1:XBBHcIb256gUJtLmY22n99HaZTz+r2Z51xUPi01m3wg=\ngithub.com/eiannone/keyboard v0.0.0-20220611211555-0d226195f203/go.mod h1:E1jcSv8FaEny+OP/5k9UxZVw9YFWGj7eI4KR/iOBqCg=\ngithub.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=\ngithub.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=\ngithub.com/fsnotify/fsevents v0.2.0 h1:BRlvlqjvNTfogHfeBOFvSC9N0Ddy+wzQCQukyoD7o/c=\ngithub.com/fsnotify/fsevents v0.2.0/go.mod h1:B3eEk39i4hz8y1zaWS/wPrAP4O6wkIl7HQwKBr1qH/w=\ngithub.com/fvbommel/sortorder v1.1.0 h1:fUmoe+HLsBTctBDoaBwpQo5N+nrCp8g/BjKb/6ZQmYw=\ngithub.com/fvbommel/sortorder v1.1.0/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=\ngithub.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=\ngithub.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=\ngithub.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=\ngithub.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=\ngithub.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=\ngithub.com/go-openapi/analysis v0.24.1 h1:Xp+7Yn/KOnVWYG8d+hPksOYnCYImE3TieBa7rBOesYM=\ngithub.com/go-openapi/analysis v0.24.1/go.mod h1:dU+qxX7QGU1rl7IYhBC8bIfmWQdX4Buoea4TGtxXY84=\ngithub.com/go-openapi/errors v0.22.4 
h1:oi2K9mHTOb5DPW2Zjdzs/NIvwi2N3fARKaTJLdNabaM=\ngithub.com/go-openapi/errors v0.22.4/go.mod h1:z9S8ASTUqx7+CP1Q8dD8ewGH/1JWFFLX/2PmAYNQLgk=\ngithub.com/go-openapi/jsonpointer v0.22.1 h1:sHYI1He3b9NqJ4wXLoJDKmUmHkWy/L7rtEo92JUxBNk=\ngithub.com/go-openapi/jsonpointer v0.22.1/go.mod h1:pQT9OsLkfz1yWoMgYFy4x3U5GY5nUlsOn1qSBH5MkCM=\ngithub.com/go-openapi/jsonreference v0.21.3 h1:96Dn+MRPa0nYAR8DR1E03SblB5FJvh7W6krPI0Z7qMc=\ngithub.com/go-openapi/jsonreference v0.21.3/go.mod h1:RqkUP0MrLf37HqxZxrIAtTWW4ZJIK1VzduhXYBEeGc4=\ngithub.com/go-openapi/loads v0.23.2 h1:rJXAcP7g1+lWyBHC7iTY+WAF0rprtM+pm8Jxv1uQJp4=\ngithub.com/go-openapi/loads v0.23.2/go.mod h1:IEVw1GfRt/P2Pplkelxzj9BYFajiWOtY2nHZNj4UnWY=\ngithub.com/go-openapi/runtime v0.29.2 h1:UmwSGWNmWQqKm1c2MGgXVpC2FTGwPDQeUsBMufc5Yj0=\ngithub.com/go-openapi/runtime v0.29.2/go.mod h1:biq5kJXRJKBJxTDJXAa00DOTa/anflQPhT0/wmjuy+0=\ngithub.com/go-openapi/spec v0.22.1 h1:beZMa5AVQzRspNjvhe5aG1/XyBSMeX1eEOs7dMoXh/k=\ngithub.com/go-openapi/spec v0.22.1/go.mod h1:c7aeIQT175dVowfp7FeCvXXnjN/MrpaONStibD2WtDA=\ngithub.com/go-openapi/strfmt v0.25.0 h1:7R0RX7mbKLa9EYCTHRcCuIPcaqlyQiWNPTXwClK0saQ=\ngithub.com/go-openapi/strfmt v0.25.0/go.mod h1:nNXct7OzbwrMY9+5tLX4I21pzcmE6ccMGXl3jFdPfn8=\ngithub.com/go-openapi/swag v0.25.3 h1:FAa5wJXyDtI7yUztKDfZxDrSx+8WTg31MfCQ9s3PV+s=\ngithub.com/go-openapi/swag v0.25.3/go.mod h1:tX9vI8Mj8Ny+uCEk39I1QADvIPI7lkndX4qCsEqhkS8=\ngithub.com/go-openapi/swag/cmdutils v0.25.3 h1:EIwGxN143JCThNHnqfqs85R8lJcJG06qjJRZp3VvjLI=\ngithub.com/go-openapi/swag/cmdutils v0.25.3/go.mod h1:pdae/AFo6WxLl5L0rq87eRzVPm/XRHM3MoYgRMvG4A0=\ngithub.com/go-openapi/swag/conv v0.25.3 h1:PcB18wwfba7MN5BVlBIV+VxvUUeC2kEuCEyJ2/t2X7E=\ngithub.com/go-openapi/swag/conv v0.25.3/go.mod h1:n4Ibfwhn8NJnPXNRhBO5Cqb9ez7alBR40JS4rbASUPU=\ngithub.com/go-openapi/swag/fileutils v0.25.3 h1:P52Uhd7GShkeU/a1cBOuqIcHMHBrA54Z2t5fLlE85SQ=\ngithub.com/go-openapi/swag/fileutils v0.25.3/go.mod 
h1:cdOT/PKbwcysVQ9Tpr0q20lQKH7MGhOEb6EwmHOirUk=\ngithub.com/go-openapi/swag/jsonname v0.25.3 h1:U20VKDS74HiPaLV7UZkztpyVOw3JNVsit+w+gTXRj0A=\ngithub.com/go-openapi/swag/jsonname v0.25.3/go.mod h1:GPVEk9CWVhNvWhZgrnvRA6utbAltopbKwDu8mXNUMag=\ngithub.com/go-openapi/swag/jsonutils v0.25.3 h1:kV7wer79KXUM4Ea4tBdAVTU842Rg6tWstX3QbM4fGdw=\ngithub.com/go-openapi/swag/jsonutils v0.25.3/go.mod h1:ILcKqe4HC1VEZmJx51cVuZQ6MF8QvdfXsQfiaCs0z9o=\ngithub.com/go-openapi/swag/loading v0.25.3 h1:Nn65Zlzf4854MY6Ft0JdNrtnHh2bdcS/tXckpSnOb2Y=\ngithub.com/go-openapi/swag/loading v0.25.3/go.mod h1:xajJ5P4Ang+cwM5gKFrHBgkEDWfLcsAKepIuzTmOb/c=\ngithub.com/go-openapi/swag/mangling v0.25.3 h1:rGIrEzXaYWuUW1MkFmG3pcH+EIA0/CoUkQnIyB6TUyo=\ngithub.com/go-openapi/swag/mangling v0.25.3/go.mod h1:6dxwu6QyORHpIIApsdZgb6wBk/DPU15MdyYj/ikn0Hg=\ngithub.com/go-openapi/swag/netutils v0.25.3 h1:XWXHZfL/65ABiv8rvGp9dtE0C6QHTYkCrNV77jTl358=\ngithub.com/go-openapi/swag/netutils v0.25.3/go.mod h1:m2W8dtdaoX7oj9rEttLyTeEFFEBvnAx9qHd5nJEBzYg=\ngithub.com/go-openapi/swag/stringutils v0.25.3 h1:nAmWq1fUTWl/XiaEPwALjp/8BPZJun70iDHRNq/sH6w=\ngithub.com/go-openapi/swag/stringutils v0.25.3/go.mod h1:GTsRvhJW5xM5gkgiFe0fV3PUlFm0dr8vki6/VSRaZK0=\ngithub.com/go-openapi/swag/typeutils v0.25.3 h1:2w4mEEo7DQt3V4veWMZw0yTPQibiL3ri2fdDV4t2TQc=\ngithub.com/go-openapi/swag/typeutils v0.25.3/go.mod h1:Ou7g//Wx8tTLS9vG0UmzfCsjZjKhpjxayRKTHXf2pTE=\ngithub.com/go-openapi/swag/yamlutils v0.25.3 h1:LKTJjCn/W1ZfMec0XDL4Vxh8kyAnv1orH5F2OREDUrg=\ngithub.com/go-openapi/swag/yamlutils v0.25.3/go.mod h1:Y7QN6Wc5DOBXK14/xeo1cQlq0EA0wvLoSv13gDQoCao=\ngithub.com/go-openapi/validate v0.25.1 h1:sSACUI6Jcnbo5IWqbYHgjibrhhmt3vR6lCzKZnmAgBw=\ngithub.com/go-openapi/validate v0.25.1/go.mod h1:RMVyVFYte0gbSTaZ0N4KmTn6u/kClvAFp+mAVfS/DQc=\ngithub.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=\ngithub.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=\ngithub.com/gofrs/flock 
v0.13.0 h1:95JolYOvGMqeH31+FC7D2+uULf6mG61mEZ/A8dRYMzw=\ngithub.com/gofrs/flock v0.13.0/go.mod h1:jxeyy9R1auM5S6JYDBhDt+E2TCo7DkratH4Pgi8P+Z0=\ngithub.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=\ngithub.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=\ngithub.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=\ngithub.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=\ngithub.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=\ngithub.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=\ngithub.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=\ngithub.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=\ngithub.com/google/certificate-transparency-go v1.3.2 h1:9ahSNZF2o7SYMaKaXhAumVEzXB2QaayzII9C8rv7v+A=\ngithub.com/google/certificate-transparency-go v1.3.2/go.mod h1:H5FpMUaGa5Ab2+KCYsxg6sELw3Flkl7pGZzWdBoYLXs=\ngithub.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=\ngithub.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=\ngithub.com/google/go-containerregistry v0.20.6 h1:cvWX87UxxLgaH76b4hIvya6Dzz9qHB31qAwjAohdSTU=\ngithub.com/google/go-containerregistry v0.20.6/go.mod h1:T0x8MuoAoKX/873bkeSfLD2FAkwCDf9/HZgsFJ02E2Y=\ngithub.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=\ngithub.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=\ngithub.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=\ngithub.com/gorilla/mux v1.8.1/go.mod 
h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=\ngithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=\ngithub.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4=\ngithub.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=\ngithub.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=\ngithub.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=\ngithub.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=\ngithub.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=\ngithub.com/hashicorp/go-version v1.8.0 h1:KAkNb1HAiZd1ukkxDFGmokVZe1Xy9HG6NUp+bPle2i4=\ngithub.com/hashicorp/go-version v1.8.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=\ngithub.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec h1:qv2VnGeEQHchGaZ/u7lxST/RaJw+cv273q79D81Xbog=\ngithub.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68=\ngithub.com/in-toto/attestation v1.1.2 h1:MBFn6lsMq6dptQZJBhalXTcWMb/aJy3V+GX3VYj/V1E=\ngithub.com/in-toto/attestation v1.1.2/go.mod h1:gYFddHMZj3DiQ0b62ltNi1Vj5rC879bTmBbrv9CRHpM=\ngithub.com/in-toto/in-toto-golang v0.9.0 h1:tHny7ac4KgtsfrG6ybU8gVOZux2H8jN05AXJ9EBM1XU=\ngithub.com/in-toto/in-toto-golang v0.9.0/go.mod h1:xsBVrVsHNsB61++S6Dy2vWosKhuA3lUTQd+eF9HdeMo=\ngithub.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=\ngithub.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=\ngithub.com/inhies/go-bytesize v0.0.0-20220417184213-4913239db9cf h1:FtEj8sfIcaaBfAKrE1Cwb61YDtYq9JxChK1c7AKce7s=\ngithub.com/inhies/go-bytesize v0.0.0-20220417184213-4913239db9cf/go.mod h1:yrqSXGoD/4EKfF26AOGzscPOgTTJcyAwM2rpixWT+t4=\ngithub.com/jonboulle/clockwork v0.5.0 
h1:Hyh9A8u51kptdkR+cqRpT1EebBwTn1oK9YfGYbdFz6I=\ngithub.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60=\ngithub.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=\ngithub.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=\ngithub.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=\ngithub.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=\ngithub.com/klauspost/compress v1.18.3 h1:9PJRvfbmTabkOX8moIpXPbMMbYN60bWImDDU7L+/6zw=\ngithub.com/klauspost/compress v1.18.3/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=\ngithub.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=\ngithub.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=\ngithub.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=\ngithub.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=\ngithub.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=\ngithub.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=\ngithub.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=\ngithub.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=\ngithub.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=\ngithub.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=\ngithub.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=\ngithub.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=\ngithub.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebGE2xrk=\ngithub.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=\ngithub.com/mgutz/ansi 
v0.0.0-20170206155736-9520e82c474b h1:j7+1HpAFS1zy5+Q4qx1fWh90gTKwiN4QCGoY9TWyyO4=\ngithub.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=\ngithub.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=\ngithub.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=\ngithub.com/mitchellh/hashstructure/v2 v2.0.2 h1:vGKWl0YJqUNxE8d+h8f6NJLcCJrgbhC4NcD46KavDd4=\ngithub.com/mitchellh/hashstructure/v2 v2.0.2/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/zz4kQkprJgF2EVszyDE=\ngithub.com/moby/buildkit v0.27.1 h1:qlIWpnZzqCkrYiGkctM1gBD/YZPOJTjtUdRBlI0oBOU=\ngithub.com/moby/buildkit v0.27.1/go.mod h1:99qLrCrIAFgEOiFnCi9Y0Wwp6/qA7QvZ3uq/6wF0IsI=\ngithub.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=\ngithub.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=\ngithub.com/moby/go-archive v0.2.0 h1:zg5QDUM2mi0JIM9fdQZWC7U8+2ZfixfTYoHL7rWUcP8=\ngithub.com/moby/go-archive v0.2.0/go.mod h1:mNeivT14o8xU+5q1YnNrkQVpK+dnNe/K6fHqnTg4qPU=\ngithub.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=\ngithub.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=\ngithub.com/moby/moby/api v1.54.0 h1:7kbUgyiKcoBhm0UrWbdrMs7RX8dnwzURKVbZGy2GnL0=\ngithub.com/moby/moby/api v1.54.0/go.mod h1:8mb+ReTlisw4pS6BRzCMts5M49W5M7bKt1cJy/YbAqc=\ngithub.com/moby/moby/client v0.3.0 h1:UUGL5okry+Aomj3WhGt9Aigl3ZOxZGqR7XPo+RLPlKs=\ngithub.com/moby/moby/client v0.3.0/go.mod h1:HJgFbJRvogDQjbM8fqc1MCEm4mIAGMLjXbgwoZp6jCQ=\ngithub.com/moby/patternmatcher v0.6.0 h1:GmP9lR19aU5GqSSFko+5pRqHi+Ohk1O69aFiKkVGiPk=\ngithub.com/moby/patternmatcher v0.6.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=\ngithub.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=\ngithub.com/moby/sys/atomicwriter v0.1.0/go.mod 
h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=\ngithub.com/moby/sys/capability v0.4.0 h1:4D4mI6KlNtWMCM1Z/K0i7RV1FkX+DBDHKVJpCndZoHk=\ngithub.com/moby/sys/capability v0.4.0/go.mod h1:4g9IK291rVkms3LKCDOoYlnV8xKwoDTpIrNEE35Wq0I=\ngithub.com/moby/sys/mountinfo v0.7.2 h1:1shs6aH5s4o5H2zQLn796ADW1wMrIwHsyJ2v9KouLrg=\ngithub.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4=\ngithub.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=\ngithub.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=\ngithub.com/moby/sys/signal v0.7.1 h1:PrQxdvxcGijdo6UXXo/lU/TvHUWyPhj7UOpSo8tuvk0=\ngithub.com/moby/sys/signal v0.7.1/go.mod h1:Se1VGehYokAkrSQwL4tDzHvETwUZlnY7S5XtQ50mQp8=\ngithub.com/moby/sys/symlink v0.3.0 h1:GZX89mEZ9u53f97npBy4Rc3vJKj7JBDj/PN2I22GrNU=\ngithub.com/moby/sys/symlink v0.3.0/go.mod h1:3eNdhduHmYPcgsJtZXW1W4XUJdZGBIkttZ8xKqPUJq0=\ngithub.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=\ngithub.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=\ngithub.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=\ngithub.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=\ngithub.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=\ngithub.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=\ngithub.com/morikuni/aec v1.1.0 h1:vBBl0pUnvi/Je71dsRrhMBtreIqNMYErSAbEeb8jrXQ=\ngithub.com/morikuni/aec v1.1.0/go.mod h1:xDRgiq/iw5l+zkao76YTKzKttOp2cwPEne25HDkJnBw=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=\ngithub.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=\ngithub.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=\ngithub.com/oklog/ulid v1.3.1/go.mod 
h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=\ngithub.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=\ngithub.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=\ngithub.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=\ngithub.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=\ngithub.com/opencontainers/runtime-spec v1.3.0 h1:YZupQUdctfhpZy3TM39nN9Ika5CBWT5diQ8ibYCRkxg=\ngithub.com/opencontainers/runtime-spec v1.3.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=\ngithub.com/opencontainers/selinux v1.13.1 h1:A8nNeceYngH9Ow++M+VVEwJVpdFmrlxsN22F+ISDCJE=\ngithub.com/opencontainers/selinux v1.13.1/go.mod h1:S10WXZ/osk2kWOYKy1x2f/eXF5ZHJoUs8UU/2caNRbg=\ngithub.com/otiai10/copy v1.14.1 h1:5/7E6qsUMBaH5AnQ0sSLzzTg1oTECmcCmT6lvF45Na8=\ngithub.com/otiai10/copy v1.14.1/go.mod h1:oQwrEDDOci3IM8dJF0d8+jnbfPDllW6vUjNc3DoZm9I=\ngithub.com/otiai10/mint v1.6.3 h1:87qsV/aw1F5as1eH1zS/yqHY85ANKVMgkDrf9rcxbQs=\ngithub.com/otiai10/mint v1.6.3/go.mod h1:MJm72SBthJjz8qhefc4z1PYEieWmy8Bku7CjcAqyUSM=\ngithub.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=\ngithub.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=\ngithub.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=\ngithub.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=\ngithub.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=\ngithub.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/prometheus/client_golang v1.23.2 
h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=\ngithub.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=\ngithub.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=\ngithub.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=\ngithub.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=\ngithub.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=\ngithub.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=\ngithub.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=\ngithub.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=\ngithub.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=\ngithub.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=\ngithub.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=\ngithub.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=\ngithub.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=\ngithub.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=\ngithub.com/santhosh-tekuri/jsonschema/v6 v6.0.1 h1:PKK9DyHxif4LZo+uQSgXNqs0jj5+xZwwfKHgph2lxBw=\ngithub.com/santhosh-tekuri/jsonschema/v6 v6.0.1/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU=\ngithub.com/secure-systems-lab/go-securesystemslib v0.9.1 h1:nZZaNz4DiERIQguNy0cL5qTdn9lR8XKHf4RUyG1Sx3g=\ngithub.com/secure-systems-lab/go-securesystemslib v0.9.1/go.mod h1:np53YzT0zXGMv6x4iEWc9Z59uR+x+ndLwCLqPYpLXVU=\ngithub.com/shibumi/go-pathspec v1.3.0 h1:QUyMZhFo0Md5B8zV8x2tesohbb5kfbpTi9rBnKh5dkI=\ngithub.com/shibumi/go-pathspec v1.3.0/go.mod h1:Xutfslp817l2I1cZvgcfeMQJG5QnU2lh5tVaaMCl3jE=\ngithub.com/sigstore/protobuf-specs v0.5.0 
h1:F8YTI65xOHw70NrvPwJ5PhAzsvTnuJMGLkA4FIkofAY=\ngithub.com/sigstore/protobuf-specs v0.5.0/go.mod h1:+gXR+38nIa2oEupqDdzg4qSBT0Os+sP7oYv6alWewWc=\ngithub.com/sigstore/rekor v1.4.3 h1:2+aw4Gbgumv8vYM/QVg6b+hvr4x4Cukur8stJrVPKU0=\ngithub.com/sigstore/rekor v1.4.3/go.mod h1:o0zgY087Q21YwohVvGwV9vK1/tliat5mfnPiVI3i75o=\ngithub.com/sigstore/rekor-tiles/v2 v2.0.1 h1:1Wfz15oSRNGF5Dzb0lWn5W8+lfO50ork4PGIfEKjZeo=\ngithub.com/sigstore/rekor-tiles/v2 v2.0.1/go.mod h1:Pjsbhzj5hc3MKY8FfVTYHBUHQEnP0ozC4huatu4x7OU=\ngithub.com/sigstore/sigstore v1.10.0 h1:lQrmdzqlR8p9SCfWIpFoGUqdXEzJSZT2X+lTXOMPaQI=\ngithub.com/sigstore/sigstore v1.10.0/go.mod h1:Ygq+L/y9Bm3YnjpJTlQrOk/gXyrjkpn3/AEJpmk1n9Y=\ngithub.com/sigstore/sigstore-go v1.1.4-0.20251124094504-b5fe07a5a7d7 h1:94NLPmq4bxvdmslzcG670IOkrlS98CGpmob8cjpFHuI=\ngithub.com/sigstore/sigstore-go v1.1.4-0.20251124094504-b5fe07a5a7d7/go.mod h1:4r/PNX0G7uzkLpc3PSdYs5E2k4bWEJNXTK6kwAyw9TM=\ngithub.com/sigstore/timestamp-authority/v2 v2.0.2 h1:WavlEeLh6HKt+osbmsHDg6/FaM/8Pz9iVUMh9pAsl/o=\ngithub.com/sigstore/timestamp-authority/v2 v2.0.2/go.mod h1:D+wbQg8ASQzKnwBhLo7rIJD+9Zev4Ppqd4myPe8k57E=\ngithub.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=\ngithub.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=\ngithub.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 h1:JIAuq3EEf9cgbU6AtGPK4CTG3Zf6CKMNqf0MHTggAUA=\ngithub.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966/go.mod h1:sUM3LWHvSMaG192sy56D9F7CNvL7jUJVXoqM1QKLnog=\ngithub.com/spdx/tools-golang v0.5.7 h1:+sWcKGnhwp3vLdMqPcLdA6QK679vd86cK9hQWH3AwCg=\ngithub.com/spdx/tools-golang v0.5.7/go.mod h1:jg7w0LOpoNAw6OxKEzCoqPC2GCTj45LyTlVmXubDsYw=\ngithub.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=\ngithub.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=\ngithub.com/spf13/pflag v1.0.9/go.mod 
h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=\ngithub.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=\ngithub.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=\ngithub.com/theupdateframework/go-tuf v0.7.0 h1:CqbQFrWo1ae3/I0UCblSbczevCCbS31Qvs5LdxRWqRI=\ngithub.com/theupdateframework/go-tuf/v2 v2.3.0 h1:gt3X8xT8qu/HT4w+n1jgv+p7koi5ad8XEkLXXZqG9AA=\ngithub.com/theupdateframework/go-tuf/v2 v2.3.0/go.mod h1:xW8yNvgXRncmovMLvBxKwrKpsOwJZu/8x+aB0KtFcdw=\ngithub.com/tilt-dev/fsnotify v1.4.8-0.20220602155310-fff9c274a375 h1:QB54BJwA6x8QU9nHY3xJSZR2kX9bgpZekRKGkLTmEXA=\ngithub.com/tilt-dev/fsnotify v1.4.8-0.20220602155310-fff9c274a375/go.mod h1:xRroudyp5iVtxKqZCrA6n2TLFRBf8bmnjr1UD4x+z7g=\ngithub.com/tonistiigi/dchapes-mode v0.0.0-20250318174251-73d941a28323 h1:r0p7fK56l8WPequOaR3i9LBqfPtEdXIQbUTzT55iqT4=\ngithub.com/tonistiigi/dchapes-mode v0.0.0-20250318174251-73d941a28323/go.mod h1:3Iuxbr0P7D3zUzBMAZB+ois3h/et0shEz0qApgHYGpY=\ngithub.com/tonistiigi/fsutil v0.0.0-20251211185533-a2aa163d723f h1:Z4NEQ86qFl1mHuCu9gwcE+EYCwDKfXAYXZbdIXyxmEA=\ngithub.com/tonistiigi/fsutil v0.0.0-20251211185533-a2aa163d723f/go.mod h1:BKdcez7BiVtBvIcef90ZPc6ebqIWr4JWD7+EvLm6J98=\ngithub.com/tonistiigi/go-csvvalue v0.0.0-20240814133006-030d3b2625d0 h1:2f304B10LaZdB8kkVEaoXvAMVan2tl9AiK4G0odjQtE=\ngithub.com/tonistiigi/go-csvvalue v0.0.0-20240814133006-030d3b2625d0/go.mod h1:278M4p8WsNh3n4a1eqiFcV2FGk7wE5fwUpUom9mK9lE=\ngithub.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea 
h1:SXhTLE6pb6eld/v/cCndK0AMpt1wiVFb/YYmqB3/QG0=\ngithub.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea/go.mod h1:WPnis/6cRcDZSUvVmezrxJPkiO87ThFYsoUiMwWNDJk=\ngithub.com/tonistiigi/vt100 v0.0.0-20240514184818-90bafcd6abab h1:H6aJ0yKQ0gF49Qb2z5hI1UHxSQt4JMyxebFR15KnApw=\ngithub.com/tonistiigi/vt100 v0.0.0-20240514184818-90bafcd6abab/go.mod h1:ulncasL3N9uLrVann0m+CDlJKWsIAP34MPcOJF6VRvc=\ngithub.com/transparency-dev/formats v0.0.0-20251017110053-404c0d5b696c h1:5a2XDQ2LiAUV+/RjckMyq9sXudfrPSuCY4FuPC1NyAw=\ngithub.com/transparency-dev/formats v0.0.0-20251017110053-404c0d5b696c/go.mod h1:g85IafeFJZLxlzZCDRu4JLpfS7HKzR+Hw9qRh3bVzDI=\ngithub.com/transparency-dev/merkle v0.0.2 h1:Q9nBoQcZcgPamMkGn7ghV8XiTZ/kRxn1yCG81+twTK4=\ngithub.com/transparency-dev/merkle v0.0.2/go.mod h1:pqSy+OXefQ1EDUVmAJ8MUhHB9TXGuzVAT58PqBoHz1A=\ngithub.com/vbatts/tar-split v0.12.2 h1:w/Y6tjxpeiFMR47yzZPlPj/FcPLpXbTUi/9H7d3CPa4=\ngithub.com/vbatts/tar-split v0.12.2/go.mod h1:eF6B6i6ftWQcDqEn3/iGFRFRo8cBIMSJVOpnNdfTMFA=\ngithub.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=\ngithub.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=\ngithub.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=\ngithub.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=\ngithub.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=\ngo.mongodb.org/mongo-driver v1.17.6 h1:87JUG1wZfWsr6rIz3ZmpH90rL5tea7O3IHuSwHUpsss=\ngo.mongodb.org/mongo-driver v1.17.6/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=\ngo.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=\ngo.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=\ngo.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=\ngo.opentelemetry.io/auto/sdk v1.2.1/go.mod 
h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=\ngo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0 h1:YH4g8lQroajqUwWbq/tr2QX1JFmEXaDLgG+ew9bLMWo=\ngo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0/go.mod h1:fvPi2qXDqFs8M4B4fmJhE92TyQs9Ydjlg3RvfUp+NbQ=\ngo.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0 h1:2pn7OzMewmYRiNtv1doZnLo3gONcnMHlFnmOR8Vgt+8=\ngo.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace v0.63.0/go.mod h1:rjbQTDEPQymPE0YnRQp9/NuPwwtL0sesz/fnqRW/v84=\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=\ngo.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=\ngo.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=\ngo.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 h1:vl9obrcoWVKp/lwl8tRE33853I8Xru9HFbw/skNeLs8=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0/go.mod h1:GAXRxmLJcVM3u22IjTg74zWBrRCKq8BnOqUVLodpcpw=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 h1:Oe2z/BCg5q7k4iXC3cqJxKYg0ieRiOqF0cecFYdPTwk=\ngo.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0/go.mod h1:ZQM5lAJpOsKnYagGg/zV2krVqTtaVdYdDkhMoX6Oalg=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod 
h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=\ngo.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=\ngo.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=\ngo.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=\ngo.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=\ngo.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=\ngo.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=\ngo.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=\ngo.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=\ngo.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=\ngo.opentelemetry.io/proto/otlp v1.7.1 h1:gTOMpGDb0WTBOP8JaO72iL3auEZhVmAQg4ipjOVAtj4=\ngo.opentelemetry.io/proto/otlp v1.7.1/go.mod h1:b2rVh6rfI/s2pHWNlB7ILJcRALpcNDzKhACevjI+ZnE=\ngo.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=\ngo.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=\ngo.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=\ngo.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=\ngo.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=\ngo.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=\ngo.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=\ngo.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=\ngo.yaml.in/yaml/v4 v4.0.0-rc.4 h1:UP4+v6fFrBIb1l934bDl//mmnoIZEDK0idg1+AIvX5U=\ngo.yaml.in/yaml/v4 v4.0.0-rc.4/go.mod 
h1:aZqd9kCMsGL7AuUv/m/PvWLdg5sjJsZ4oHDEnfPPfY0=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=\ngolang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=\ngolang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=\ngolang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=\ngolang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=\ngolang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=\ngolang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=\ngolang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=\ngolang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI=\ngolang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=\ngolang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=\ngolang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=\ngolang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=\ngolang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=\ngolang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=\ngolang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210331175145-43e1dd70ce54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=\ngolang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=\ngolang.org/x/sys v0.42.0/go.mod 
h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=\ngolang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=\ngolang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=\ngolang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=\ngolang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=\ngolang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=\ngolang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=\ngolang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=\ngolang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=\ngolang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=\ngolang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=\ngolang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=\ngolang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=\ngolang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=\ngolang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod 
h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=\ngonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20251103181224-f26f9409b101 h1:vk5TfqZHNn0obhPIYeS+cxIFKFQgser/M2jnI+9c6MM=\ngoogle.golang.org/genproto/googleapis/api v0.0.0-20251103181224-f26f9409b101/go.mod h1:E17fc4PDhkr22dE3RgnH2hEubUaky6ZwW4VhANxyspg=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=\ngoogle.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=\ngoogle.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=\ngoogle.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=\ngoogle.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=\ngoogle.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=\ngopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=\ngopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=\ngopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=\ngotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=\npgregory.net/rapid v1.2.0 
h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=\npgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=\ntags.cncf.io/container-device-interface v1.1.0 h1:RnxNhxF1JOu6CJUVpetTYvrXHdxw9j9jFYgZpI+anSY=\ntags.cncf.io/container-device-interface v1.1.0/go.mod h1:76Oj0Yqp9FwTx/pySDc8Bxjpg+VqXfDb50cKAXVJ34Q=\n"
  },
  {
    "path": "internal/desktop/client.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage desktop\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp\"\n\n\t\"github.com/docker/compose/v5/internal\"\n\t\"github.com/docker/compose/v5/internal/memnet\"\n)\n\n// identify this client in the logs\nvar userAgent = \"compose/\" + internal.Version\n\n// Client for integration with Docker Desktop features.\ntype Client struct {\n\tapiEndpoint string\n\tclient      *http.Client\n}\n\n// NewClient creates a Desktop integration client for the provided in-memory\n// socket address (AF_UNIX or named pipe).\nfunc NewClient(apiEndpoint string) *Client {\n\tvar transport http.RoundTripper = &http.Transport{\n\t\tDisableCompression: true,\n\t\tDialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {\n\t\t\treturn memnet.DialEndpoint(ctx, apiEndpoint)\n\t\t},\n\t}\n\ttransport = otelhttp.NewTransport(transport)\n\n\treturn &Client{\n\t\tapiEndpoint: apiEndpoint,\n\t\tclient:      &http.Client{Transport: transport},\n\t}\n}\n\nfunc (c *Client) Endpoint() string {\n\treturn c.apiEndpoint\n}\n\n// Close releases any open connections.\nfunc (c *Client) Close() error {\n\tc.client.CloseIdleConnections()\n\treturn nil\n}\n\ntype PingResponse struct {\n\tServerTime int64 `json:\"serverTime\"`\n}\n\n// 
Ping is a minimal API used to ensure that the server is available.\nfunc (c *Client) Ping(ctx context.Context) (*PingResponse, error) {\n\treq, err := c.newRequest(ctx, http.MethodGet, \"/ping\", http.NoBody)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tresp, err := c.client.Do(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\t_ = resp.Body.Close()\n\t}()\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn nil, fmt.Errorf(\"unexpected status code: %d\", resp.StatusCode)\n\t}\n\n\tvar ret PingResponse\n\tif err := json.NewDecoder(resp.Body).Decode(&ret); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &ret, nil\n}\n\ntype FeatureFlagResponse map[string]FeatureFlagValue\n\ntype FeatureFlagValue struct {\n\tEnabled bool `json:\"enabled\"`\n}\n\nfunc (c *Client) FeatureFlags(ctx context.Context) (FeatureFlagResponse, error) {\n\treq, err := c.newRequest(ctx, http.MethodGet, \"/features\", http.NoBody)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tresp, err := c.client.Do(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() {\n\t\t_ = resp.Body.Close()\n\t}()\n\n\tif resp.StatusCode != http.StatusOK {\n\t\treturn nil, fmt.Errorf(\"unexpected status code: %d\", resp.StatusCode)\n\t}\n\n\tvar ret FeatureFlagResponse\n\tif err := json.NewDecoder(resp.Body).Decode(&ret); err != nil {\n\t\treturn nil, err\n\t}\n\treturn ret, nil\n}\n\nfunc (c *Client) newRequest(ctx context.Context, method, path string, body io.Reader) (*http.Request, error) {\n\treq, err := http.NewRequestWithContext(ctx, method, backendURL(path), body)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treq.Header.Set(\"User-Agent\", userAgent)\n\treturn req, nil\n}\n\n// backendURL generates a URL for the given API path.\n//\n// NOTE: Custom transport handles communication. 
The host is to create a valid\n// URL for the Go http.Client that is also descriptive in error/logs.\nfunc backendURL(path string) string {\n\treturn \"http://docker-desktop/\" + strings.TrimPrefix(path, \"/\")\n}\n"
  },
  {
    "path": "internal/desktop/client_test.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage desktop\n\nimport (\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestClientPing(t *testing.T) {\n\tif testing.Short() {\n\t\tt.Skip(\"Skipped in short mode - test connects to Docker Desktop\")\n\t}\n\tdesktopEndpoint := os.Getenv(\"COMPOSE_TEST_DESKTOP_ENDPOINT\")\n\tif desktopEndpoint == \"\" {\n\t\tt.Skip(\"Skipping - COMPOSE_TEST_DESKTOP_ENDPOINT not defined\")\n\t}\n\n\tclient := NewClient(desktopEndpoint)\n\tt.Cleanup(func() {\n\t\t_ = client.Close()\n\t})\n\n\tnow := time.Now()\n\n\tret, err := client.Ping(t.Context())\n\trequire.NoError(t, err)\n\n\tserverTime := time.Unix(0, ret.ServerTime)\n\trequire.True(t, now.Before(serverTime))\n}\n"
  },
  {
    "path": "internal/experimental/experimental.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage experimental\n\nimport (\n\t\"context\"\n\t\"os\"\n\t\"strconv\"\n\n\t\"github.com/docker/compose/v5/internal/desktop\"\n)\n\n// envComposeExperimentalGlobal can be set to a falsy value (e.g. 0, false) to\n// globally opt-out of any experimental features in Compose.\nconst envComposeExperimentalGlobal = \"COMPOSE_EXPERIMENTAL\"\n\n// State of experiments (enabled/disabled) based on environment and local config.\ntype State struct {\n\t// active is false if experiments have been opted-out of globally.\n\tactive        bool\n\tdesktopValues desktop.FeatureFlagResponse\n}\n\nfunc NewState() *State {\n\t// experimental features have individual controls, but users can opt out\n\t// of ALL experiments easily if desired\n\texperimentsActive := true\n\tif v := os.Getenv(envComposeExperimentalGlobal); v != \"\" {\n\t\texperimentsActive, _ = strconv.ParseBool(v)\n\t}\n\treturn &State{\n\t\tactive: experimentsActive,\n\t}\n}\n\nfunc (s *State) Load(ctx context.Context, client *desktop.Client) error {\n\tif !s.active {\n\t\t// user opted out of experiments globally, no need to load state from\n\t\t// Desktop\n\t\treturn nil\n\t}\n\n\tif client == nil {\n\t\t// not running under Docker Desktop\n\t\treturn nil\n\t}\n\n\tdesktopValues, err := client.FeatureFlags(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\ts.desktopValues = 
desktopValues\n\treturn nil\n}\n"
  },
  {
    "path": "internal/locker/pidfile.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"fmt\"\n\t\"path/filepath\"\n)\n\ntype Pidfile struct {\n\tpath string\n}\n\nfunc NewPidfile(projectName string) (*Pidfile, error) {\n\trun, err := runDir()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tpath := filepath.Join(run, fmt.Sprintf(\"%s.pid\", projectName))\n\treturn &Pidfile{path: path}, nil\n}\n"
  },
  {
    "path": "internal/locker/pidfile_unix.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"os\"\n\n\t\"github.com/docker/docker/pkg/pidfile\"\n)\n\nfunc (f *Pidfile) Lock() error {\n\treturn pidfile.Write(f.path, os.Getpid())\n}\n"
  },
  {
    "path": "internal/locker/pidfile_windows.go",
    "content": "//go:build windows\n\n/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"os\"\n\n\t\"github.com/docker/docker/pkg/pidfile\"\n\t\"github.com/mitchellh/go-ps\"\n)\n\nfunc (f *Pidfile) Lock() error {\n\tnewPID := os.Getpid()\n\terr := pidfile.Write(f.path, newPID)\n\tif err != nil {\n\t\t// Get PID registered in the file\n\t\tpid, errPid := pidfile.Read(f.path)\n\t\tif errPid != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Some users faced issues on Windows where the process written in the pidfile was identified as still existing\n\t\t// So we used a 2nd process library to verify if this not a false positive feedback\n\t\t// Check if the process exists\n\t\tprocess, errPid := ps.FindProcess(pid)\n\t\tif process == nil && errPid == nil {\n\t\t\t// If the process does not exist, remove the pidfile and try to lock again\n\t\t\t_ = os.Remove(f.path)\n\t\t\treturn pidfile.Write(f.path, newPID)\n\t\t}\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "internal/locker/runtime.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"os\"\n)\n\nfunc runDir() (string, error) {\n\trun, ok := os.LookupEnv(\"XDG_RUNTIME_DIR\")\n\tif ok {\n\t\treturn run, nil\n\t}\n\n\tpath, err := osDependentRunDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\terr = os.MkdirAll(path, 0o700)\n\treturn path, err\n}\n"
  },
  {
    "path": "internal/locker/runtime_darwin.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// Based on https://github.com/adrg/xdg\n// Licensed under MIT License (MIT)\n// Copyright (c) 2014 Adrian-George Bostan <adrg@epistack.com>\n\nfunc osDependentRunDir() (string, error) {\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(home, \"Library\", \"Application Support\", \"com.docker.compose\"), nil\n}\n"
  },
  {
    "path": "internal/locker/runtime_unix.go",
    "content": "//go:build linux || openbsd || freebsd\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n)\n\n// Based on https://github.com/adrg/xdg\n// Licensed under MIT License (MIT)\n// Copyright (c) 2014 Adrian-George Bostan <adrg@epistack.com>\n\nfunc osDependentRunDir() (string, error) {\n\trun := filepath.Join(\"run\", \"user\", strconv.Itoa(os.Getuid()))\n\tif _, err := os.Stat(run); err == nil {\n\t\treturn run, nil\n\t}\n\n\t// /run/user/$uid is set by pam_systemd, but might not be present, especially in containerized environments\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(home, \".docker\", \"docker-compose\"), nil\n}\n"
  },
  {
    "path": "internal/locker/runtime_windows.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage locker\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"golang.org/x/sys/windows\"\n)\n\n// Based on https://github.com/adrg/xdg\n// Licensed under MIT License (MIT)\n// Copyright (c) 2014 Adrian-George Bostan <adrg@epistack.com>\n\nfunc osDependentRunDir() (string, error) {\n\tflags := []uint32{windows.KF_FLAG_DEFAULT, windows.KF_FLAG_DEFAULT_PATH}\n\tfor _, flag := range flags {\n\t\tp, _ := windows.KnownFolderPath(windows.FOLDERID_LocalAppData, flag|windows.KF_FLAG_DONT_VERIFY)\n\t\tif p != \"\" {\n\t\t\treturn filepath.Join(p, \"docker-compose\"), nil\n\t\t}\n\t}\n\n\tappData, ok := os.LookupEnv(\"LOCALAPPDATA\")\n\tif ok {\n\t\treturn filepath.Join(appData, \"docker-compose\"), nil\n\t}\n\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(home, \"AppData\", \"Local\", \"docker-compose\"), nil\n}\n"
  },
  {
    "path": "internal/memnet/conn.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage memnet\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"strings\"\n)\n\nfunc DialEndpoint(ctx context.Context, endpoint string) (net.Conn, error) {\n\tif addr, ok := strings.CutPrefix(endpoint, \"unix://\"); ok {\n\t\treturn Dial(ctx, \"unix\", addr)\n\t}\n\tif addr, ok := strings.CutPrefix(endpoint, \"npipe://\"); ok {\n\t\treturn Dial(ctx, \"npipe\", addr)\n\t}\n\treturn nil, fmt.Errorf(\"unsupported protocol for address: %s\", endpoint)\n}\n\nfunc Dial(ctx context.Context, network, addr string) (net.Conn, error) {\n\tvar d net.Dialer\n\tswitch network {\n\tcase \"unix\":\n\t\tif err := validateSocketPath(addr); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn d.DialContext(ctx, \"unix\", addr)\n\tcase \"npipe\":\n\t\t// N.B. this will return an error on non-Windows\n\t\treturn dialNamedPipe(ctx, addr)\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unsupported network: %s\", network)\n\t}\n}\n"
  },
  {
    "path": "internal/memnet/conn_unix.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage memnet\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"syscall\"\n)\n\nconst maxUnixSocketPathSize = len(syscall.RawSockaddrUnix{}.Path)\n\nfunc dialNamedPipe(_ context.Context, _ string) (net.Conn, error) {\n\treturn nil, fmt.Errorf(\"named pipes are only available on Windows\")\n}\n\nfunc validateSocketPath(addr string) error {\n\tif len(addr) > maxUnixSocketPathSize {\n\t\treturn fmt.Errorf(\"socket address is too long: %s\", addr)\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "internal/memnet/conn_windows.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage memnet\n\nimport (\n\t\"context\"\n\t\"net\"\n\n\t\"github.com/Microsoft/go-winio\"\n)\n\nfunc dialNamedPipe(ctx context.Context, addr string) (net.Conn, error) {\n\treturn winio.DialPipeContext(ctx, addr)\n}\n\nfunc validateSocketPath(addr string) error {\n\t// AF_UNIX sockets do not have strict path limits on Windows\n\treturn nil\n}\n"
  },
  {
    "path": "internal/oci/push.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage oci\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"time\"\n\n\t\"github.com/containerd/containerd/v2/core/remotes\"\n\tpusherrors \"github.com/containerd/containerd/v2/core/remotes/errors\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/opencontainers/go-digest\"\n\t\"github.com/opencontainers/image-spec/specs-go\"\n\tv1 \"github.com/opencontainers/image-spec/specs-go/v1\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nconst (\n\t// ComposeProjectArtifactType is the OCI 1.1-compliant artifact type value\n\t// for the generated image manifest.\n\tComposeProjectArtifactType = \"application/vnd.docker.compose.project\"\n\t// ComposeYAMLMediaType is the media type for each layer (Compose file)\n\t// in the image manifest.\n\tComposeYAMLMediaType = \"application/vnd.docker.compose.file+yaml\"\n\t// ComposeEmptyConfigMediaType is a media type used for the config descriptor\n\t// when doing OCI 1.0-style pushes.\n\t//\n\t// The content is always `{}`, the same as a normal empty descriptor, but\n\t// the specific media type allows clients to fall back to the config media\n\t// type to recognize the manifest as a Compose project since the artifact\n\t// type field is not available in OCI 1.0.\n\t//\n\t// This is based on guidance from the OCI 
1.1 spec:\n\t//\t> Implementers note: artifacts have historically been created without\n\t// \t> an artifactType field, and tooling to work with artifacts should\n\t//\t> fallback to the config.mediaType value.\n\tComposeEmptyConfigMediaType = \"application/vnd.docker.compose.config.empty.v1+json\"\n\t// ComposeEnvFileMediaType is the media type for each Env File layer in the image manifest.\n\tComposeEnvFileMediaType = \"application/vnd.docker.compose.envfile\"\n)\n\n// clientAuthStatusCodes are client (4xx) errors that are authentication\n// related.\nvar clientAuthStatusCodes = []int{\n\thttp.StatusUnauthorized,\n\thttp.StatusForbidden,\n\thttp.StatusProxyAuthRequired,\n}\n\nfunc DescriptorForComposeFile(path string, content []byte) v1.Descriptor {\n\treturn v1.Descriptor{\n\t\tMediaType: ComposeYAMLMediaType,\n\t\tDigest:    digest.FromString(string(content)),\n\t\tSize:      int64(len(content)),\n\t\tAnnotations: map[string]string{\n\t\t\t\"com.docker.compose.version\": api.ComposeVersion,\n\t\t\t\"com.docker.compose.file\":    filepath.Base(path),\n\t\t},\n\t\tData: content,\n\t}\n}\n\nfunc DescriptorForEnvFile(path string, content []byte) v1.Descriptor {\n\treturn v1.Descriptor{\n\t\tMediaType: ComposeEnvFileMediaType,\n\t\tDigest:    digest.FromString(string(content)),\n\t\tSize:      int64(len(content)),\n\t\tAnnotations: map[string]string{\n\t\t\t\"com.docker.compose.version\": api.ComposeVersion,\n\t\t\t\"com.docker.compose.envfile\": filepath.Base(path),\n\t\t},\n\t\tData: content,\n\t}\n}\n\nfunc PushManifest(ctx context.Context, resolver remotes.Resolver, named reference.Named, layers []v1.Descriptor, ociVersion api.OCIVersion) (v1.Descriptor, error) {\n\t// Check if we need an extra empty layer for the manifest config\n\tif ociVersion == api.OCIVersion1_1 || ociVersion == \"\" {\n\t\terr := push(ctx, resolver, named, v1.DescriptorEmptyJSON)\n\t\tif err != nil {\n\t\t\treturn v1.Descriptor{}, err\n\t\t}\n\t}\n\t// prepare to push the manifest by 
pushing the layers\n\tlayerDescriptors := make([]v1.Descriptor, len(layers))\n\tfor i := range layers {\n\t\tlayerDescriptors[i] = layers[i]\n\t\tif err := push(ctx, resolver, named, layers[i]); err != nil {\n\t\t\treturn v1.Descriptor{}, err\n\t\t}\n\t}\n\n\tif ociVersion != \"\" {\n\t\t// if a version was explicitly specified, use it\n\t\treturn createAndPushManifest(ctx, resolver, named, layerDescriptors, ociVersion)\n\t}\n\n\t// try to push in the OCI 1.1 format but fallback to OCI 1.0 on 4xx errors\n\t// (other than auth) since it's most likely the result of the registry not\n\t// having support\n\tdescriptor, err := createAndPushManifest(ctx, resolver, named, layerDescriptors, api.OCIVersion1_1)\n\tvar pushErr pusherrors.ErrUnexpectedStatus\n\tif errors.As(err, &pushErr) && isNonAuthClientError(pushErr.StatusCode) {\n\t\t// TODO(milas): show a warning here (won't work with logrus)\n\t\treturn createAndPushManifest(ctx, resolver, named, layerDescriptors, api.OCIVersion1_0)\n\t}\n\treturn descriptor, err\n}\n\nfunc push(ctx context.Context, resolver remotes.Resolver, ref reference.Named, descriptor v1.Descriptor) error {\n\tfullRef, err := reference.WithDigest(reference.TagNameOnly(ref), descriptor.Digest)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn Push(ctx, resolver, fullRef, descriptor)\n}\n\nfunc createAndPushManifest(ctx context.Context, resolver remotes.Resolver, named reference.Named, layers []v1.Descriptor, ociVersion api.OCIVersion) (v1.Descriptor, error) {\n\tdescriptor, toPush, err := generateManifest(layers, ociVersion)\n\tif err != nil {\n\t\treturn v1.Descriptor{}, err\n\t}\n\tfor _, p := range toPush {\n\t\terr = push(ctx, resolver, named, p)\n\t\tif err != nil {\n\t\t\treturn v1.Descriptor{}, err\n\t\t}\n\t}\n\treturn descriptor, nil\n}\n\nfunc isNonAuthClientError(statusCode int) bool {\n\tif statusCode < 400 || statusCode >= 500 {\n\t\t// not a client error\n\t\treturn false\n\t}\n\treturn !slices.Contains(clientAuthStatusCodes, 
statusCode)\n}\n\nfunc generateManifest(layers []v1.Descriptor, ociCompat api.OCIVersion) (v1.Descriptor, []v1.Descriptor, error) {\n\tvar toPush []v1.Descriptor\n\tvar config v1.Descriptor\n\tvar artifactType string\n\tswitch ociCompat {\n\tcase api.OCIVersion1_0:\n\t\t// \"Content other than OCI container images MAY be packaged using the image manifest.\n\t\t// When this is done, the config.mediaType value MUST be set to a value specific to\n\t\t// the artifact type or the empty value.\"\n\t\t// Source: https://github.com/opencontainers/image-spec/blob/main/manifest.md#guidelines-for-artifact-usage\n\t\t//\n\t\t// The `ComposeEmptyConfigMediaType` is used specifically for this purpose:\n\t\t// there is no config, and an empty descriptor is used for OCI 1.1 in\n\t\t// conjunction with the `ArtifactType`, but for OCI 1.0 compatibility,\n\t\t// tooling falls back to the config media type, so this is used to\n\t\t// indicate that it's not a container image but custom content.\n\t\tconfigData := []byte(\"{}\")\n\t\tconfig = v1.Descriptor{\n\t\t\tMediaType: ComposeEmptyConfigMediaType,\n\t\t\tDigest:    digest.FromBytes(configData),\n\t\t\tSize:      int64(len(configData)),\n\t\t\tData:      configData,\n\t\t}\n\t\t// N.B. 
OCI 1.0 does NOT support specifying the artifact type, so it's\n\t\t//\t\tleft as an empty string to omit it from the marshaled JSON\n\t\tartifactType = \"\"\n\t\ttoPush = append(toPush, config)\n\tcase api.OCIVersion1_1:\n\t\tconfig = v1.DescriptorEmptyJSON\n\t\tartifactType = ComposeProjectArtifactType\n\t\ttoPush = append(toPush, config)\n\tdefault:\n\t\treturn v1.Descriptor{}, nil, fmt.Errorf(\"unsupported OCI version: %s\", ociCompat)\n\t}\n\n\tmanifest, err := json.Marshal(v1.Manifest{\n\t\tVersioned:    specs.Versioned{SchemaVersion: 2},\n\t\tMediaType:    v1.MediaTypeImageManifest,\n\t\tArtifactType: artifactType,\n\t\tConfig:       config,\n\t\tLayers:       layers,\n\t\tAnnotations: map[string]string{\n\t\t\t\"org.opencontainers.image.created\": time.Now().Format(time.RFC3339),\n\t\t},\n\t})\n\tif err != nil {\n\t\treturn v1.Descriptor{}, nil, err\n\t}\n\n\tmanifestDescriptor := v1.Descriptor{\n\t\tMediaType: v1.MediaTypeImageManifest,\n\t\tDigest:    digest.FromString(string(manifest)),\n\t\tSize:      int64(len(manifest)),\n\t\tAnnotations: map[string]string{\n\t\t\t\"com.docker.compose.version\": api.ComposeVersion,\n\t\t},\n\t\tArtifactType: artifactType,\n\t\tData:         manifest,\n\t}\n\ttoPush = append(toPush, manifestDescriptor)\n\treturn manifestDescriptor, toPush, nil\n}\n"
  },
  {
    "path": "internal/oci/resolver.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage oci\n\nimport (\n\t\"context\"\n\t\"io\"\n\t\"net/url\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/containerd/containerd/v2/core/remotes\"\n\t\"github.com/containerd/containerd/v2/core/remotes/docker\"\n\t\"github.com/containerd/containerd/v2/pkg/labels\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/docker/cli/cli/config/configfile\"\n\t\"github.com/moby/buildkit/util/contentutil\"\n\tspec \"github.com/opencontainers/image-spec/specs-go/v1\"\n\n\t\"github.com/docker/compose/v5/internal/registry\"\n)\n\n// NewResolver setup an OCI Resolver based on docker/cli config to provide registry credentials\nfunc NewResolver(config *configfile.ConfigFile, insecureRegistries ...string) remotes.Resolver {\n\treturn docker.NewResolver(docker.ResolverOptions{\n\t\tHosts: docker.ConfigureDefaultRegistries(\n\t\t\tdocker.WithAuthorizer(docker.NewDockerAuthorizer(\n\t\t\t\tdocker.WithAuthCreds(func(host string) (string, string, error) {\n\t\t\t\t\thost = registry.GetAuthConfigKey(host)\n\t\t\t\t\tauth, err := config.GetAuthConfig(host)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn \"\", \"\", err\n\t\t\t\t\t}\n\t\t\t\t\tif auth.IdentityToken != \"\" {\n\t\t\t\t\t\treturn \"\", auth.IdentityToken, nil\n\t\t\t\t\t}\n\t\t\t\t\treturn auth.Username, auth.Password, 
nil\n\t\t\t\t}),\n\t\t\t)),\n\t\t\tdocker.WithPlainHTTP(func(domain string) (bool, error) {\n\t\t\t\t// Should be used for testing **only**\n\t\t\t\treturn slices.Contains(insecureRegistries, domain), nil\n\t\t\t}),\n\t\t),\n\t})\n}\n\n// Get retrieves a Named OCI resource and returns OCI Descriptor and Manifest\nfunc Get(ctx context.Context, resolver remotes.Resolver, ref reference.Named) (spec.Descriptor, []byte, error) {\n\t_, descriptor, err := resolver.Resolve(ctx, ref.String())\n\tif err != nil {\n\t\treturn spec.Descriptor{}, nil, err\n\t}\n\n\tfetcher, err := resolver.Fetcher(ctx, ref.String())\n\tif err != nil {\n\t\treturn spec.Descriptor{}, nil, err\n\t}\n\tfetch, err := fetcher.Fetch(ctx, descriptor)\n\tif err != nil {\n\t\treturn spec.Descriptor{}, nil, err\n\t}\n\tcontent, err := io.ReadAll(fetch)\n\tif err != nil {\n\t\treturn spec.Descriptor{}, nil, err\n\t}\n\treturn descriptor, content, nil\n}\n\nfunc Copy(ctx context.Context, resolver remotes.Resolver, image reference.Named, named reference.Named) (spec.Descriptor, error) {\n\tsrc, desc, err := resolver.Resolve(ctx, image.String())\n\tif err != nil {\n\t\treturn spec.Descriptor{}, err\n\t}\n\tif desc.Annotations == nil {\n\t\tdesc.Annotations = make(map[string]string)\n\t}\n\t// set LabelDistributionSource so push will actually use a registry mount\n\trefspec := reference.TrimNamed(image).String()\n\tu, err := url.Parse(\"dummy://\" + refspec)\n\tif err != nil {\n\t\treturn spec.Descriptor{}, err\n\t}\n\tsource, repo := u.Hostname(), strings.TrimPrefix(u.Path, \"/\")\n\tdesc.Annotations[labels.LabelDistributionSource+\".\"+source] = repo\n\n\tp, err := resolver.Pusher(ctx, named.Name())\n\tif err != nil {\n\t\treturn spec.Descriptor{}, err\n\t}\n\tf, err := resolver.Fetcher(ctx, src)\n\tif err != nil {\n\t\treturn spec.Descriptor{}, err\n\t}\n\n\terr = contentutil.CopyChain(ctx,\n\t\tcontentutil.FromPusher(p),\n\t\tcontentutil.FromFetcher(f), desc)\n\treturn desc, err\n}\n\nfunc Push(ctx 
context.Context, resolver remotes.Resolver, ref reference.Named, descriptor spec.Descriptor) error {\n\tpusher, err := resolver.Pusher(ctx, ref.String())\n\tif err != nil {\n\t\treturn err\n\t}\n\tctx = remotes.WithMediaTypeKeyPrefix(ctx, ComposeYAMLMediaType, \"artifact-\")\n\tctx = remotes.WithMediaTypeKeyPrefix(ctx, ComposeEnvFileMediaType, \"artifact-\")\n\tctx = remotes.WithMediaTypeKeyPrefix(ctx, ComposeEmptyConfigMediaType, \"config-\")\n\tctx = remotes.WithMediaTypeKeyPrefix(ctx, spec.MediaTypeEmptyJSON, \"config-\")\n\n\tpush, err := pusher.Push(ctx, descriptor)\n\tif errdefs.IsAlreadyExists(err) {\n\t\treturn nil\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t_, err = push.Write(descriptor.Data)\n\tif err != nil {\n\t\t// Close the writer on error since Commit won't be called\n\t\t_ = push.Close()\n\t\treturn err\n\t}\n\t// Commit will close the writer\n\treturn push.Commit(ctx, int64(len(descriptor.Data)), descriptor.Digest)\n}\n"
  },
  {
    "path": "internal/paths/paths.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage paths\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\nfunc IsChild(dir string, file string) bool {\n\tif dir == \"\" {\n\t\treturn false\n\t}\n\n\tdir = filepath.Clean(dir)\n\tcurrent := filepath.Clean(file)\n\tchild := \".\"\n\tfor {\n\t\tif strings.EqualFold(dir, current) {\n\t\t\t// If the two paths are exactly equal, then they must be the same.\n\t\t\tif dir == current {\n\t\t\t\treturn true\n\t\t\t}\n\n\t\t\t// If the two paths are equal under case-folding, but not exactly equal,\n\t\t\t// then the only way to check if they're truly \"equal\" is to check\n\t\t\t// to see if we're on a case-insensitive file system.\n\t\t\t//\n\t\t\t// This is a notoriously tricky problem. 
See how dep solves it here:\n\t\t\t// https://github.com/golang/dep/blob/v0.5.4/internal/fs/fs.go#L33\n\t\t\t//\n\t\t\t// because you can mount case-sensitive filesystems onto case-insensitive\n\t\t\t// file-systems, and vice versa :scream:\n\t\t\t//\n\t\t\t// We want to do as much of this check as possible with strings-only\n\t\t\t// (to avoid a file system read and error handling), so we only\n\t\t\t// do this check if we have no other choice.\n\t\t\tdirInfo, err := os.Stat(dir)\n\t\t\tif err != nil {\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\tcurrentInfo, err := os.Stat(current)\n\t\t\tif err != nil {\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\tif !os.SameFile(dirInfo, currentInfo) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\treturn true\n\t\t}\n\n\t\tif len(current) <= len(dir) || current == \".\" {\n\t\t\treturn false\n\t\t}\n\n\t\tcDir := filepath.Dir(current)\n\t\tcBase := filepath.Base(current)\n\t\tchild = filepath.Join(cBase, child)\n\t\tcurrent = cDir\n\t}\n}\n\n// EncompassingPaths returns the minimal set of paths that root all paths\n// from the original collection.\n//\n// For example, [\"/foo\", \"/foo/bar\", \"/foo\", \"/baz\"] -> [\"/foo\", \"/baz\"].\nfunc EncompassingPaths(paths []string) []string {\n\tresult := []string{}\n\tfor _, current := range paths {\n\t\tisCovered := false\n\t\thasRemovals := false\n\n\t\tfor i, existing := range result {\n\t\t\tif IsChild(existing, current) {\n\t\t\t\t// The path is already covered, so there's no need to include it\n\t\t\t\tisCovered = true\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tif IsChild(current, existing) {\n\t\t\t\t// Mark the element empty for removal.\n\t\t\t\tresult[i] = \"\"\n\t\t\t\thasRemovals = true\n\t\t\t}\n\t\t}\n\n\t\tif !isCovered {\n\t\t\tresult = append(result, current)\n\t\t}\n\n\t\tif hasRemovals {\n\t\t\t// Remove all the empties\n\t\t\tnewResult := []string{}\n\t\t\tfor _, r := range result {\n\t\t\t\tif r != \"\" {\n\t\t\t\t\tnewResult = append(newResult, r)\n\t\t\t\t}\n\t\t\t}\n\t\t\tresult = 
newResult\n\t\t}\n\t}\n\treturn result\n}\n"
  },
  {
    "path": "internal/registry/registry.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage registry\n\nconst (\n\t// DefaultNamespace is the default namespace\n\tDefaultNamespace = \"docker.io\"\n\t// DefaultRegistryHost is the hostname for the default (Docker Hub) registry\n\t// used for pushing and pulling images. This hostname is hard-coded to handle\n\t// the conversion from image references without registry name (e.g. \"ubuntu\",\n\t// or \"ubuntu:latest\"), as well as references using the \"docker.io\" domain\n\t// name, which is used as canonical reference for images on Docker Hub, but\n\t// does not match the domain-name of Docker Hub's registry.\n\tDefaultRegistryHost = \"registry-1.docker.io\"\n\t// IndexHostname is the index hostname, used for authentication and image search.\n\tIndexHostname = \"index.docker.io\"\n\t// IndexServer is used for user auth and image search\n\tIndexServer = \"https://\" + IndexHostname + \"/v1/\"\n\t// IndexName is the name of the index\n\tIndexName = \"docker.io\"\n)\n\n// GetAuthConfigKey special-cases using the full index address of the official\n// index as the AuthConfig key, and uses the (host)name[:port] for private indexes.\nfunc GetAuthConfigKey(indexName string) string {\n\tif indexName == IndexName || indexName == IndexHostname || indexName == DefaultRegistryHost {\n\t\treturn IndexServer\n\t}\n\treturn indexName\n}\n"
  },
  {
    "path": "internal/sync/shared.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage sync\n\nimport (\n\t\"context\"\n)\n\n// PathMapping contains the Compose service and modified host system path.\ntype PathMapping struct {\n\t// HostPath that was created/modified/deleted outside the container.\n\t//\n\t// This is the path as seen from the user's perspective, e.g.\n\t// \t- C:\\Users\\moby\\Documents\\hello-world\\main.go (file on Windows)\n\t//  - /Users/moby/Documents/hello-world (directory on macOS)\n\tHostPath string\n\t// ContainerPath for the target file inside the container (only populated\n\t// for sync events, not rebuild).\n\t//\n\t// This is the path as used in Docker CLI commands, e.g.\n\t//\t- /workdir/main.go\n\t//  - /workdir/subdir\n\tContainerPath string\n}\n\ntype Syncer interface {\n\tSync(ctx context.Context, service string, paths []*PathMapping) error\n}\n"
  },
  {
    "path": "internal/sync/tar.go",
    "content": "/*\n   Copyright 2018 The Tilt Dev Authors\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage sync\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/moby/go-archive\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"golang.org/x/sync/errgroup\"\n)\n\ntype archiveEntry struct {\n\tpath   string\n\tinfo   os.FileInfo\n\theader *tar.Header\n}\n\ntype LowLevelClient interface {\n\tContainersForService(ctx context.Context, projectName string, serviceName string) ([]container.Summary, error)\n\n\tExec(ctx context.Context, containerID string, cmd []string, in io.Reader) error\n\tUntar(ctx context.Context, id string, reader io.ReadCloser) error\n}\n\ntype Tar struct {\n\tclient LowLevelClient\n\n\tprojectName string\n}\n\nvar _ Syncer = &Tar{}\n\nfunc NewTar(projectName string, client LowLevelClient) *Tar {\n\treturn &Tar{\n\t\tprojectName: projectName,\n\t\tclient:      client,\n\t}\n}\n\nfunc (t *Tar) Sync(ctx context.Context, service string, paths []*PathMapping) error {\n\tcontainers, err := t.client.ContainersForService(ctx, t.projectName, service)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar pathsToCopy []PathMapping\n\tvar pathsToDelete []string\n\tfor _, p := range paths {\n\t\tif _, err := os.Stat(p.HostPath); err != nil && 
errors.Is(err, fs.ErrNotExist) {\n\t\t\tpathsToDelete = append(pathsToDelete, p.ContainerPath)\n\t\t} else {\n\t\t\tpathsToCopy = append(pathsToCopy, *p)\n\t\t}\n\t}\n\n\tvar deleteCmd []string\n\tif len(pathsToDelete) != 0 {\n\t\tdeleteCmd = append([]string{\"rm\", \"-rf\"}, pathsToDelete...)\n\t}\n\n\tvar (\n\t\teg    errgroup.Group\n\t\terrMu sync.Mutex\n\t\terrs  = make([]error, 0, len(containers)*2) // max 2 errs per container\n\t)\n\n\teg.SetLimit(16) // arbitrary limit, adjust to taste :D\n\tfor i := range containers {\n\t\tcontainerID := containers[i].ID\n\t\ttarReader := tarArchive(pathsToCopy)\n\n\t\teg.Go(func() error {\n\t\t\tif len(deleteCmd) != 0 {\n\t\t\t\tif err := t.client.Exec(ctx, containerID, deleteCmd, nil); err != nil {\n\t\t\t\t\terrMu.Lock()\n\t\t\t\t\terrs = append(errs, fmt.Errorf(\"deleting paths in %s: %w\", containerID, err))\n\t\t\t\t\terrMu.Unlock()\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif err := t.client.Untar(ctx, containerID, tarReader); err != nil {\n\t\t\t\terrMu.Lock()\n\t\t\t\terrs = append(errs, fmt.Errorf(\"copying files to %s: %w\", containerID, err))\n\t\t\t\terrMu.Unlock()\n\t\t\t}\n\t\t\treturn nil // don't fail-fast; collect all errors\n\t\t})\n\t}\n\n\t_ = eg.Wait()\n\treturn errors.Join(errs...)\n}\n\ntype ArchiveBuilder struct {\n\ttw *tar.Writer\n\t// A shared I/O buffer to help with file copying.\n\tcopyBuf *bytes.Buffer\n}\n\nfunc NewArchiveBuilder(writer io.Writer) *ArchiveBuilder {\n\ttw := tar.NewWriter(writer)\n\treturn &ArchiveBuilder{\n\t\ttw:      tw,\n\t\tcopyBuf: &bytes.Buffer{},\n\t}\n}\n\nfunc (a *ArchiveBuilder) Close() error {\n\treturn a.tw.Close()\n}\n\n// ArchivePathsIfExist creates a tar archive of all local files in `paths`. 
It quietly skips any paths that don't exist.\nfunc (a *ArchiveBuilder) ArchivePathsIfExist(paths []PathMapping) error {\n\t// In order to handle overlapping syncs, we\n\t// 1) collect all the entries,\n\t// 2) de-dupe them, with last-one-wins semantics\n\t// 3) write all the entries\n\t//\n\t// It's not obvious that this is the correct behavior. A better approach\n\t// (that's more in-line with how syncs work) might ignore files in earlier\n\t// path mappings when we know they're going to be \"synced\" over.\n\t// There's a bunch of subtle product decisions about how overlapping path\n\t// mappings work that we're not sure about.\n\tvar entries []archiveEntry\n\tfor _, p := range paths {\n\t\tnewEntries, err := a.entriesForPath(p.HostPath, p.ContainerPath)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"inspecting %q: %w\", p.HostPath, err)\n\t\t}\n\n\t\tentries = append(entries, newEntries...)\n\t}\n\n\tentries = dedupeEntries(entries)\n\tfor _, entry := range entries {\n\t\terr := a.writeEntry(entry)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"archiving %q: %w\", entry.path, err)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (a *ArchiveBuilder) writeEntry(entry archiveEntry) error {\n\tpathInTar := entry.path\n\theader := entry.header\n\n\tif header.Typeflag != tar.TypeReg {\n\t\t// anything other than a regular file (e.g. 
dir, symlink) just needs the header\n\t\tif err := a.tw.WriteHeader(header); err != nil {\n\t\t\treturn fmt.Errorf(\"writing %q header: %w\", pathInTar, err)\n\t\t}\n\t\treturn nil\n\t}\n\n\tfile, err := os.Open(pathInTar)\n\tif err != nil {\n\t\t// In case the file has been deleted since we last looked at it.\n\t\tif os.IsNotExist(err) {\n\t\t\treturn nil\n\t\t}\n\t\treturn err\n\t}\n\n\tdefer func() {\n\t\t_ = file.Close()\n\t}()\n\n\t// The size header must match the number of contents bytes.\n\t//\n\t// There is room for a race condition here if something writes to the file\n\t// after we've read the file size.\n\t//\n\t// For small files, we avoid this by first copying the file into a buffer,\n\t// and using the size of the buffer to populate the header.\n\t//\n\t// For larger files, we don't want to copy the whole thing into a buffer,\n\t// because that would blow up heap size. There is some danger that this\n\t// will lead to a spurious error when the tar writer validates the sizes.\n\t// That error will be disruptive but will be handled as best as we\n\t// can downstream.\n\tuseBuf := header.Size < 5000000\n\tif useBuf {\n\t\ta.copyBuf.Reset()\n\t\t_, err = io.Copy(a.copyBuf, file)\n\t\tif err != nil && !errors.Is(err, io.EOF) {\n\t\t\treturn fmt.Errorf(\"copying %q: %w\", pathInTar, err)\n\t\t}\n\t\theader.Size = int64(len(a.copyBuf.Bytes()))\n\t}\n\n\t// wait to write the header until _after_ the file is successfully opened\n\t// to avoid generating an invalid tar entry that has a header but no contents\n\t// in the case the file has been deleted\n\terr = a.tw.WriteHeader(header)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"writing %q header: %w\", pathInTar, err)\n\t}\n\n\tif useBuf {\n\t\t_, err = io.Copy(a.tw, a.copyBuf)\n\t} else {\n\t\t_, err = io.Copy(a.tw, file)\n\t}\n\n\tif err != nil && !errors.Is(err, io.EOF) {\n\t\treturn fmt.Errorf(\"copying %q: %w\", pathInTar, err)\n\t}\n\n\t// explicitly flush so that if the entry is invalid we will detect 
it now and\n\t// provide a more meaningful error\n\tif err := a.tw.Flush(); err != nil {\n\t\treturn fmt.Errorf(\"finalizing %q: %w\", pathInTar, err)\n\t}\n\treturn nil\n}\n\n// entriesForPath writes the given source path into tarWriter at the given dest (recursively for directories).\n// e.g. tarring my_dir --> dest d: d/file_a, d/file_b\n// If source path does not exist, quietly skips it and returns no err\nfunc (a *ArchiveBuilder) entriesForPath(localPath, containerPath string) ([]archiveEntry, error) {\n\tlocalInfo, err := os.Stat(localPath)\n\tif err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn nil, nil\n\t\t}\n\t\treturn nil, err\n\t}\n\n\tlocalPathIsDir := localInfo.IsDir()\n\tif localPathIsDir {\n\t\t// Make sure we can trim this off filenames to get valid relative filepaths\n\t\tif !strings.HasSuffix(localPath, string(filepath.Separator)) {\n\t\t\tlocalPath += string(filepath.Separator)\n\t\t}\n\t}\n\n\tcontainerPath = strings.TrimPrefix(containerPath, \"/\")\n\n\tresult := make([]archiveEntry, 0)\n\terr = filepath.Walk(localPath, func(curLocalPath string, info os.FileInfo, err error) error {\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"walking %q: %w\", curLocalPath, err)\n\t\t}\n\n\t\tlinkname := \"\"\n\t\tif info.Mode()&os.ModeSymlink != 0 {\n\t\t\tvar err error\n\t\t\tlinkname, err = os.Readlink(curLocalPath)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\tvar name string\n\t\t//nolint:gocritic\n\t\tif localPathIsDir {\n\t\t\t// Name of file in tar should be relative to source directory...\n\t\t\ttmp, err := filepath.Rel(localPath, curLocalPath)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"making %q relative to %q: %w\", curLocalPath, localPath, err)\n\t\t\t}\n\t\t\t// ...and live inside `dest`\n\t\t\tname = path.Join(containerPath, filepath.ToSlash(tmp))\n\t\t} else if strings.HasSuffix(containerPath, \"/\") {\n\t\t\tname = containerPath + filepath.Base(curLocalPath)\n\t\t} else {\n\t\t\tname = 
containerPath\n\t\t}\n\n\t\theader, err := archive.FileInfoHeader(name, info, linkname)\n\t\tif err != nil {\n\t\t\t// Not all types of files are allowed in a tarball. That's OK.\n\t\t\t// Mimic the Docker behavior and just skip the file.\n\t\t\treturn nil\n\t\t}\n\n\t\tresult = append(result, archiveEntry{\n\t\t\tpath:   curLocalPath,\n\t\t\tinfo:   info,\n\t\t\theader: header,\n\t\t})\n\n\t\treturn nil\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn result, nil\n}\n\nfunc tarArchive(ops []PathMapping) io.ReadCloser {\n\tpr, pw := io.Pipe()\n\tgo func() {\n\t\tab := NewArchiveBuilder(pw)\n\t\terr := ab.ArchivePathsIfExist(ops)\n\t\tif err != nil {\n\t\t\t_ = pw.CloseWithError(fmt.Errorf(\"adding files to tar: %w\", err))\n\t\t} else {\n\t\t\t// propagate errors from the TarWriter::Close() because it performs a final\n\t\t\t// Flush() and any errors mean the tar is invalid\n\t\t\tif err := ab.Close(); err != nil {\n\t\t\t\t_ = pw.CloseWithError(fmt.Errorf(\"closing tar: %w\", err))\n\t\t\t} else {\n\t\t\t\t_ = pw.Close()\n\t\t\t}\n\t\t}\n\t}()\n\treturn pr\n}\n\n// Dedupe the entries with last-entry-wins semantics.\nfunc dedupeEntries(entries []archiveEntry) []archiveEntry {\n\tseenIndex := make(map[string]int, len(entries))\n\tresult := make([]archiveEntry, 0, len(entries))\n\tfor i, entry := range entries {\n\t\tseenIndex[entry.header.Name] = i\n\t}\n\tfor i, entry := range entries {\n\t\tif seenIndex[entry.header.Name] == i {\n\t\t\tresult = append(result, entry)\n\t\t}\n\t}\n\treturn result\n}\n"
  },
  {
    "path": "internal/tracing/attributes.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"context\"\n\t\"crypto/sha256\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/trace\"\n)\n\n// SpanOptions is a small helper type to make it easy to share the options helpers between\n// downstream functions that accept slices of trace.SpanStartOption and trace.EventOption.\ntype SpanOptions []trace.SpanStartEventOption\n\ntype MetricsKey struct{}\n\ntype Metrics struct {\n\tCountExtends        int\n\tCountIncludesLocal  int\n\tCountIncludesRemote int\n}\n\nfunc (s SpanOptions) SpanStartOptions() []trace.SpanStartOption {\n\tout := make([]trace.SpanStartOption, len(s))\n\tfor i := range s {\n\t\tout[i] = s[i]\n\t}\n\treturn out\n}\n\nfunc (s SpanOptions) EventOptions() []trace.EventOption {\n\tout := make([]trace.EventOption, len(s))\n\tfor i := range s {\n\t\tout[i] = s[i]\n\t}\n\treturn out\n}\n\n// ProjectOptions returns common attributes from a Compose project.\n//\n// For convenience, it's returned as a SpanOptions object to allow it to be\n// passed directly to the wrapping helper methods in this package such as\n// SpanWrapFunc.\nfunc ProjectOptions(ctx context.Context, proj *types.Project) SpanOptions {\n\tif proj 
== nil {\n\t\treturn nil\n\t}\n\tcapabilities, gpu, tpu := proj.ServicesWithCapabilities()\n\tattrs := []attribute.KeyValue{\n\t\tattribute.String(\"project.name\", proj.Name),\n\t\tattribute.String(\"project.dir\", proj.WorkingDir),\n\t\tattribute.StringSlice(\"project.compose_files\", proj.ComposeFiles),\n\t\tattribute.StringSlice(\"project.profiles\", proj.Profiles),\n\t\tattribute.StringSlice(\"project.volumes\", proj.VolumeNames()),\n\t\tattribute.StringSlice(\"project.networks\", proj.NetworkNames()),\n\t\tattribute.StringSlice(\"project.secrets\", proj.SecretNames()),\n\t\tattribute.StringSlice(\"project.configs\", proj.ConfigNames()),\n\t\tattribute.StringSlice(\"project.models\", proj.ModelNames()),\n\t\tattribute.StringSlice(\"project.extensions\", keys(proj.Extensions)),\n\t\tattribute.StringSlice(\"project.services.active\", proj.ServiceNames()),\n\t\tattribute.StringSlice(\"project.services.disabled\", proj.DisabledServiceNames()),\n\t\tattribute.StringSlice(\"project.services.build\", proj.ServicesWithBuild()),\n\t\tattribute.StringSlice(\"project.services.depends_on\", proj.ServicesWithDependsOn()),\n\t\tattribute.StringSlice(\"project.services.models\", proj.ServicesWithModels()),\n\t\tattribute.StringSlice(\"project.services.capabilities\", capabilities),\n\t\tattribute.StringSlice(\"project.services.capabilities.gpu\", gpu),\n\t\tattribute.StringSlice(\"project.services.capabilities.tpu\", tpu),\n\t}\n\tif metrics, ok := ctx.Value(MetricsKey{}).(Metrics); ok {\n\t\tattrs = append(attrs, attribute.Int(\"project.services.extends\", metrics.CountExtends))\n\t\tattrs = append(attrs, attribute.Int(\"project.includes.local\", metrics.CountIncludesLocal))\n\t\tattrs = append(attrs, attribute.Int(\"project.includes.remote\", metrics.CountIncludesRemote))\n\t}\n\n\tif projHash, ok := projectHash(proj); ok {\n\t\tattrs = append(attrs, attribute.String(\"project.hash\", projHash))\n\t}\n\treturn 
[]trace.SpanStartEventOption{\n\t\ttrace.WithAttributes(attrs...),\n\t}\n}\n\n// ServiceOptions returns common attributes from a Compose service.\n//\n// For convenience, it's returned as a SpanOptions object to allow it to be\n// passed directly to the wrapping helper methods in this package such as\n// SpanWrapFunc.\nfunc ServiceOptions(service types.ServiceConfig) SpanOptions {\n\tattrs := []attribute.KeyValue{\n\t\tattribute.String(\"service.name\", service.Name),\n\t\tattribute.String(\"service.image\", service.Image),\n\t\tattribute.StringSlice(\"service.networks\", keys(service.Networks)),\n\t\tattribute.StringSlice(\"service.models\", keys(service.Models)),\n\t}\n\n\tconfigNames := make([]string, len(service.Configs))\n\tfor i := range service.Configs {\n\t\tconfigNames[i] = service.Configs[i].Source\n\t}\n\tattrs = append(attrs, attribute.StringSlice(\"service.configs\", configNames))\n\n\tsecretNames := make([]string, len(service.Secrets))\n\tfor i := range service.Secrets {\n\t\tsecretNames[i] = service.Secrets[i].Source\n\t}\n\tattrs = append(attrs, attribute.StringSlice(\"service.secrets\", secretNames))\n\n\tvolNames := make([]string, len(service.Volumes))\n\tfor i := range service.Volumes {\n\t\tvolNames[i] = service.Volumes[i].Source\n\t}\n\tattrs = append(attrs, attribute.StringSlice(\"service.volumes\", volNames))\n\n\treturn []trace.SpanStartEventOption{\n\t\ttrace.WithAttributes(attrs...),\n\t}\n}\n\n// ContainerOptions returns common attributes from a Moby container.\n//\n// For convenience, it's returned as a SpanOptions object to allow it to be\n// passed directly to the wrapping helper methods in this package such as\n// SpanWrapFunc.\nfunc ContainerOptions(ctr container.Summary) SpanOptions {\n\tattrs := []attribute.KeyValue{\n\t\tattribute.String(\"container.id\", ctr.ID),\n\t\tattribute.String(\"container.image\", ctr.Image),\n\t\tunixTimeAttr(\"container.created_at\", ctr.Created),\n\t}\n\n\tif len(ctr.Names) != 0 {\n\t\tattrs = 
append(attrs, attribute.String(\"container.name\", strings.TrimPrefix(ctr.Names[0], \"/\")))\n\t}\n\n\treturn []trace.SpanStartEventOption{\n\t\ttrace.WithAttributes(attrs...),\n\t}\n}\n\nfunc keys[T any](m map[string]T) []string {\n\tout := make([]string, 0, len(m))\n\tfor k := range m {\n\t\tout = append(out, k)\n\t}\n\treturn out\n}\n\nfunc timeAttr(key string, value time.Time) attribute.KeyValue {\n\treturn attribute.String(key, value.Format(time.RFC3339))\n}\n\nfunc unixTimeAttr(key string, value int64) attribute.KeyValue {\n\treturn timeAttr(key, time.Unix(value, 0).UTC())\n}\n\n// projectHash returns a checksum from the JSON encoding of the project.\nfunc projectHash(p *types.Project) (string, bool) {\n\tif p == nil {\n\t\treturn \"\", false\n\t}\n\t// disabled services aren't included in the output, so make a copy with\n\t// all the services active for hashing\n\tvar err error\n\tp, err = p.WithServicesEnabled(append(p.ServiceNames(), p.DisabledServiceNames()...)...)\n\tif err != nil {\n\t\treturn \"\", false\n\t}\n\tprojData, err := json.Marshal(p)\n\tif err != nil {\n\t\treturn \"\", false\n\t}\n\td := sha256.Sum256(projData)\n\treturn fmt.Sprintf(\"%x\", d), true\n}\n"
  },
  {
    "path": "internal/tracing/attributes_test.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestProjectHash(t *testing.T) {\n\tprojA := &types.Project{\n\t\tName:       \"fake-proj\",\n\t\tWorkingDir: \"/tmp\",\n\t\tServices: map[string]types.ServiceConfig{\n\t\t\t\"foo\": {Image: \"fake-image\"},\n\t\t},\n\t\tDisabledServices: map[string]types.ServiceConfig{\n\t\t\t\"bar\": {Image: \"diff-image\"},\n\t\t},\n\t}\n\tprojB := &types.Project{\n\t\tName:       \"fake-proj\",\n\t\tWorkingDir: \"/tmp\",\n\t\tServices: map[string]types.ServiceConfig{\n\t\t\t\"foo\": {Image: \"fake-image\"},\n\t\t\t\"bar\": {Image: \"diff-image\"},\n\t\t},\n\t}\n\tprojC := &types.Project{\n\t\tName:       \"fake-proj\",\n\t\tWorkingDir: \"/tmp\",\n\t\tServices: map[string]types.ServiceConfig{\n\t\t\t\"foo\": {Image: \"fake-image\"},\n\t\t\t\"bar\": {Image: \"diff-image\"},\n\t\t\t\"baz\": {Image: \"yet-another-image\"},\n\t\t},\n\t}\n\n\thashA, ok := projectHash(projA)\n\trequire.True(t, ok)\n\trequire.NotEmpty(t, hashA)\n\thashB, ok := projectHash(projB)\n\trequire.True(t, ok)\n\trequire.NotEmpty(t, hashB)\n\trequire.Equal(t, hashA, hashB)\n\n\thashC, ok := projectHash(projC)\n\trequire.True(t, ok)\n\trequire.NotEmpty(t, hashC)\n\trequire.NotEqual(t, hashC, hashA)\n}\n"
  },
  {
    "path": "internal/tracing/docker_context.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/context/store\"\n\t\"go.opentelemetry.io/otel/exporters/otlp/otlptrace\"\n\t\"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc\"\n\t\"google.golang.org/grpc\"\n\t\"google.golang.org/grpc/credentials/insecure\"\n\n\t\"github.com/docker/compose/v5/internal/memnet\"\n)\n\nconst otelConfigFieldName = \"otel\"\n\n// traceClientFromDockerContext creates a gRPC OTLP client based on metadata\n// from the active Docker CLI context.\nfunc traceClientFromDockerContext(dockerCli command.Cli, otelEnv envMap) (otlptrace.Client, error) {\n\t// attempt to extract an OTEL config from the Docker context to enable\n\t// automatic integration with Docker Desktop;\n\tcfg, err := ConfigFromDockerContext(dockerCli.ContextStore(), dockerCli.CurrentContext())\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"loading otel config from docker context metadata: %w\", err)\n\t}\n\n\tif cfg.Endpoint == \"\" {\n\t\treturn nil, nil\n\t}\n\n\t// HACK: unfortunately _all_ public OTEL initialization functions\n\t// \timplicitly read from the OS env, so temporarily unset them all and\n\t// \trestore afterwards\n\tdefer func() {\n\t\tfor k, v := range otelEnv {\n\t\t\tif err := os.Setenv(k, v); err != nil 
{\n\t\t\t\tpanic(fmt.Errorf(\"restoring env for %q: %w\", k, err))\n\t\t\t}\n\t\t}\n\t}()\n\tfor k := range otelEnv {\n\t\tif err := os.Unsetenv(k); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"stashing env for %q: %w\", k, err)\n\t\t}\n\t}\n\n\tconn, err := grpc.NewClient(cfg.Endpoint,\n\t\tgrpc.WithContextDialer(memnet.DialEndpoint),\n\t\t// this dial is restricted to using a local Unix socket / named pipe,\n\t\t// so there is no need for TLS\n\t\tgrpc.WithTransportCredentials(insecure.NewCredentials()),\n\t)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"initializing otel connection from docker context metadata: %w\", err)\n\t}\n\n\tclient := otlptracegrpc.NewClient(otlptracegrpc.WithGRPCConn(conn))\n\treturn client, nil\n}\n\n// ConfigFromDockerContext inspects extra metadata included as part of the\n// specified Docker context to try and extract a valid OTLP client configuration.\nfunc ConfigFromDockerContext(st store.Store, name string) (OTLPConfig, error) {\n\tmeta, err := st.GetMetadata(name)\n\tif err != nil {\n\t\treturn OTLPConfig{}, err\n\t}\n\n\tvar otelCfg any\n\tswitch m := meta.Metadata.(type) {\n\tcase command.DockerContext:\n\t\totelCfg = m.AdditionalFields[otelConfigFieldName]\n\tcase map[string]any:\n\t\totelCfg = m[otelConfigFieldName]\n\t}\n\tif otelCfg == nil {\n\t\treturn OTLPConfig{}, nil\n\t}\n\n\totelMap, ok := otelCfg.(map[string]any)\n\tif !ok {\n\t\treturn OTLPConfig{}, fmt.Errorf(\n\t\t\t\"unexpected type for field %q: %T (expected: %T)\",\n\t\t\totelConfigFieldName,\n\t\t\totelCfg,\n\t\t\totelMap,\n\t\t)\n\t}\n\n\t// keys from https://opentelemetry.io/docs/concepts/sdk-configuration/otlp-exporter-configuration/\n\tcfg := OTLPConfig{\n\t\tEndpoint: valueOrDefault[string](otelMap, \"OTEL_EXPORTER_OTLP_ENDPOINT\"),\n\t}\n\treturn cfg, nil\n}\n\n// valueOrDefault returns the type-cast value at the specified key in the map\n// if present and the correct type; otherwise, it returns the default value for\n// T.\nfunc valueOrDefault[T 
any](m map[string]any, key string) T {\n\tif v, ok := m[key].(T); ok {\n\t\treturn v\n\t}\n\treturn *new(T)\n}\n"
  },
  {
    "path": "internal/tracing/errors.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"go.opentelemetry.io/otel\"\n)\n\n// skipErrors is a no-op otel.ErrorHandler.\ntype skipErrors struct{}\n\n// Handle does nothing, ignoring any errors passed to it.\nfunc (skipErrors) Handle(_ error) {}\n\nvar _ otel.ErrorHandler = skipErrors{}\n"
  },
  {
    "path": "internal/tracing/keyboard_metrics.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"context\"\n\n\t\"go.opentelemetry.io/otel/attribute\"\n)\n\nfunc KeyboardMetrics(ctx context.Context, enabled, isDockerDesktopActive bool) {\n\tcommandAvailable := []string{}\n\tif isDockerDesktopActive {\n\t\tcommandAvailable = append(commandAvailable, \"gui\")\n\t\tcommandAvailable = append(commandAvailable, \"gui/composeview\")\n\t}\n\n\tAddAttributeToSpan(ctx,\n\t\tattribute.Bool(\"navmenu.enabled\", enabled),\n\t\tattribute.StringSlice(\"navmenu.command_available\", commandAvailable))\n}\n"
  },
  {
    "path": "internal/tracing/mux.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"sync\"\n\n\tsdktrace \"go.opentelemetry.io/otel/sdk/trace\"\n)\n\ntype MuxExporter struct {\n\texporters []sdktrace.SpanExporter\n}\n\nfunc (m MuxExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {\n\tvar (\n\t\twg    sync.WaitGroup\n\t\terrMu sync.Mutex\n\t\terrs  = make([]error, 0, len(m.exporters))\n\t)\n\n\tfor _, exporter := range m.exporters {\n\t\twg.Go(func() {\n\t\t\tif err := exporter.ExportSpans(ctx, spans); err != nil {\n\t\t\t\terrMu.Lock()\n\t\t\t\terrs = append(errs, err)\n\t\t\t\terrMu.Unlock()\n\t\t\t}\n\t\t})\n\t}\n\twg.Wait()\n\treturn errors.Join(errs...)\n}\n\nfunc (m MuxExporter) Shutdown(ctx context.Context) error {\n\tvar (\n\t\twg    sync.WaitGroup\n\t\terrMu sync.Mutex\n\t\terrs  = make([]error, 0, len(m.exporters))\n\t)\n\n\tfor _, exporter := range m.exporters {\n\t\twg.Go(func() {\n\t\t\tif err := exporter.Shutdown(ctx); err != nil {\n\t\t\t\terrMu.Lock()\n\t\t\t\terrs = append(errs, err)\n\t\t\t\terrMu.Unlock()\n\t\t\t}\n\t\t})\n\t}\n\twg.Wait()\n\treturn errors.Join(errs...)\n}\n"
  },
  {
    "path": "internal/tracing/tracing.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/moby/buildkit/util/tracing/detect\"\n\t_ \"github.com/moby/buildkit/util/tracing/env\" //nolint:blank-imports\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/exporters/otlp/otlptrace\"\n\t\"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc\"\n\t\"go.opentelemetry.io/otel/propagation\"\n\t\"go.opentelemetry.io/otel/sdk/resource\"\n\tsdktrace \"go.opentelemetry.io/otel/sdk/trace\"\n\tsemconv \"go.opentelemetry.io/otel/semconv/v1.21.0\"\n\n\t\"github.com/docker/compose/v5/internal\"\n)\n\nfunc init() {\n\tdetect.ServiceName = \"compose\"\n\t// do not log tracing errors to stdio\n\totel.SetErrorHandler(skipErrors{})\n}\n\n// OTLPConfig contains the necessary values to initialize an OTLP client\n// manually.\n//\n// This supports a minimal set of options based on what is necessary for\n// automatic OTEL configuration from Docker context metadata.\ntype OTLPConfig struct {\n\tEndpoint string\n}\n\n// ShutdownFunc flushes and stops an OTEL exporter.\ntype ShutdownFunc func(ctx context.Context) error\n\n// envMap is a convenience type for OS environment variables.\ntype envMap map[string]string\n\nfunc InitTracing(dockerCli 
command.Cli) (ShutdownFunc, error) {\n\t// set global propagator to tracecontext (the default is no-op).\n\totel.SetTextMapPropagator(propagation.TraceContext{})\n\treturn InitProvider(dockerCli)\n}\n\nfunc InitProvider(dockerCli command.Cli) (ShutdownFunc, error) {\n\tctx := context.Background()\n\n\tvar errs []error\n\tvar exporters []sdktrace.SpanExporter\n\n\tenvClient, otelEnv := traceClientFromEnv()\n\tif envClient != nil {\n\t\tif envExporter, err := otlptrace.New(ctx, envClient); err != nil {\n\t\t\terrs = append(errs, err)\n\t\t} else if envExporter != nil {\n\t\t\texporters = append(exporters, envExporter)\n\t\t}\n\t}\n\n\tif dcClient, err := traceClientFromDockerContext(dockerCli, otelEnv); err != nil {\n\t\terrs = append(errs, err)\n\t} else if dcClient != nil {\n\t\tif dcExporter, err := otlptrace.New(ctx, dcClient); err != nil {\n\t\t\terrs = append(errs, err)\n\t\t} else if dcExporter != nil {\n\t\t\texporters = append(exporters, dcExporter)\n\t\t}\n\t}\n\tif len(errs) != 0 {\n\t\treturn nil, errors.Join(errs...)\n\t}\n\n\tres, err := resource.New(\n\t\tctx,\n\t\tresource.WithAttributes(\n\t\t\tsemconv.ServiceName(\"compose\"),\n\t\t\tsemconv.ServiceVersion(internal.Version),\n\t\t\tattribute.String(\"docker.context\", dockerCli.CurrentContext()),\n\t\t),\n\t)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to create resource: %w\", err)\n\t}\n\n\tmuxExporter := MuxExporter{exporters: exporters}\n\ttracerProvider := sdktrace.NewTracerProvider(\n\t\tsdktrace.WithResource(res),\n\t\tsdktrace.WithBatcher(muxExporter),\n\t)\n\totel.SetTracerProvider(tracerProvider)\n\n\t// Shutdown will flush any remaining spans and shut down the exporter.\n\treturn tracerProvider.Shutdown, nil\n}\n\n// traceClientFromEnv creates a GRPC OTLP client based on OS environment\n// variables.\n//\n// https://opentelemetry.io/docs/concepts/sdk-configuration/otlp-exporter-configuration/\nfunc traceClientFromEnv() (otlptrace.Client, envMap) {\n\thasOtelEndpointInEnv := 
false\n\totelEnv := make(map[string]string)\n\tfor _, kv := range os.Environ() {\n\t\tk, v, ok := strings.Cut(kv, \"=\")\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\t\tif strings.HasPrefix(k, \"OTEL_\") {\n\t\t\totelEnv[k] = v\n\t\t\tif strings.HasSuffix(k, \"ENDPOINT\") {\n\t\t\t\thasOtelEndpointInEnv = true\n\t\t\t}\n\t\t}\n\t}\n\n\tif !hasOtelEndpointInEnv {\n\t\treturn nil, nil\n\t}\n\n\tclient := otlptracegrpc.NewClient()\n\treturn client, otelEnv\n}\n"
  },
  {
    "path": "internal/tracing/tracing_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing_test\n\nimport (\n\t\"testing\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/context/store\"\n\t\"github.com/stretchr/testify/require\"\n\n\t\"github.com/docker/compose/v5/internal/tracing\"\n)\n\nvar testStoreCfg = store.NewConfig(\n\tfunc() any {\n\t\treturn &map[string]any{}\n\t},\n)\n\nfunc TestExtractOtelFromContext(t *testing.T) {\n\tif testing.Short() {\n\t\tt.Skip(\"Requires filesystem access\")\n\t}\n\n\tdir := t.TempDir()\n\n\tst := store.New(dir, testStoreCfg)\n\terr := st.CreateOrUpdate(store.Metadata{\n\t\tName: \"test\",\n\t\tMetadata: command.DockerContext{\n\t\t\tDescription: t.Name(),\n\t\t\tAdditionalFields: map[string]any{\n\t\t\t\t\"otel\": map[string]any{\n\t\t\t\t\t\"OTEL_EXPORTER_OTLP_ENDPOINT\": \"localhost:1234\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tEndpoints: make(map[string]any),\n\t})\n\trequire.NoError(t, err)\n\n\tcfg, err := tracing.ConfigFromDockerContext(st, \"test\")\n\trequire.NoError(t, err)\n\trequire.Equal(t, \"localhost:1234\", cfg.Endpoint)\n}\n"
  },
  {
    "path": "internal/tracing/wrap.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage tracing\n\nimport (\n\t\"context\"\n\n\t\"github.com/acarl005/stripansi\"\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/codes\"\n\tsemconv \"go.opentelemetry.io/otel/semconv/v1.19.0\"\n\t\"go.opentelemetry.io/otel/trace\"\n)\n\n// SpanWrapFunc wraps a function that takes a context with a trace.Span, marking the status as codes.Error if the\n// wrapped function returns an error.\n//\n// The context passed to the function is created from the span to ensure correct propagation.\n//\n// NOTE: This function is nearly identical to SpanWrapFuncForErrGroup, except the latter is designed specially for\n// convenience with errgroup.Group due to its prevalence throughout the codebase. 
The code is duplicated to avoid\n// adding even more levels of function wrapping/indirection.\nfunc SpanWrapFunc(spanName string, opts SpanOptions, fn func(ctx context.Context) error) func(context.Context) error {\n\treturn func(ctx context.Context) error {\n\t\tctx, span := otel.Tracer(\"\").Start(ctx, spanName, opts.SpanStartOptions()...)\n\t\tdefer span.End()\n\n\t\tif err := fn(ctx); err != nil {\n\t\t\tspan.SetStatus(codes.Error, err.Error())\n\t\t\treturn err\n\t\t}\n\n\t\tspan.SetStatus(codes.Ok, \"\")\n\t\treturn nil\n\t}\n}\n\n// SpanWrapFuncForErrGroup wraps a function that takes a context with a trace.Span, marking the status as codes.Error\n// if the wrapped function returns an error.\n//\n// The context passed to the function is created from the span to ensure correct propagation.\n//\n// NOTE: This function is nearly identical to SpanWrapFunc, except this function is designed specially for\n// convenience with errgroup.Group due to its prevalence throughout the codebase. The code is duplicated to avoid\n// adding even more levels of function wrapping/indirection.\nfunc SpanWrapFuncForErrGroup(ctx context.Context, spanName string, opts SpanOptions, fn func(ctx context.Context) error) func() error {\n\treturn func() error {\n\t\tctx, span := otel.Tracer(\"\").Start(ctx, spanName, opts.SpanStartOptions()...)\n\t\tdefer span.End()\n\n\t\tif err := fn(ctx); err != nil {\n\t\t\tspan.SetStatus(codes.Error, err.Error())\n\t\t\treturn err\n\t\t}\n\n\t\tspan.SetStatus(codes.Ok, \"\")\n\t\treturn nil\n\t}\n}\n\n// EventWrapFuncForErrGroup invokes a function and records an event, optionally including the returned\n// error as the \"exception message\" on the event.\n//\n// This is intended for lightweight usage to wrap errgroup.Group calls where a full span is not desired.\nfunc EventWrapFuncForErrGroup(ctx context.Context, eventName string, opts SpanOptions, fn func(ctx context.Context) error) func() error {\n\treturn func() error {\n\t\tspan := 
trace.SpanFromContext(ctx)\n\t\teventOpts := opts.EventOptions()\n\n\t\terr := fn(ctx)\n\t\tif err != nil {\n\t\t\teventOpts = append(eventOpts, trace.WithAttributes(semconv.ExceptionMessage(stripansi.Strip(err.Error()))))\n\t\t}\n\t\tspan.AddEvent(eventName, eventOpts...)\n\n\t\treturn err\n\t}\n}\n\nfunc AddAttributeToSpan(ctx context.Context, attr ...attribute.KeyValue) {\n\tspan := trace.SpanFromContext(ctx)\n\tspan.SetAttributes(attr...)\n}\n"
  },
  {
    "path": "internal/variables.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage internal\n\n// Version is the version of the CLI injected in compilation time\nvar Version = \"dev\"\n"
  },
  {
    "path": "pkg/api/api.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"slices\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/platforms\"\n\t\"github.com/docker/cli/opts\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/volume\"\n)\n\n// LoadListener receives events during project loading.\n// Events include:\n//   - \"extends\": when a service extends another (metadata: service info)\n//   - \"include\": when including external compose files (metadata: {\"path\": StringList})\n//\n// Multiple listeners can be registered, and all will be notified of events.\ntype LoadListener func(event string, metadata map[string]any)\n\n// ProjectLoadOptions configures how a Compose project should be loaded\ntype ProjectLoadOptions struct {\n\t// ProjectName to use, or empty to infer from directory\n\tProjectName string\n\t// ConfigPaths are paths to compose files\n\tConfigPaths []string\n\t// WorkingDir is the project directory\n\tWorkingDir string\n\t// EnvFiles are paths to .env files\n\tEnvFiles []string\n\t// Profiles to activate\n\tProfiles []string\n\t// Services to select (empty = all)\n\tServices []string\n\t// Offline mode disables remote resource loading\n\tOffline bool\n\t// All includes all resources 
(not just those used by services)\n\tAll bool\n\t// Compatibility enables v1 compatibility mode\n\tCompatibility bool\n\n\t// ProjectOptionsFns are compose-go project options to apply.\n\t// Use cli.WithInterpolation(false), cli.WithNormalization(false), etc.\n\t// This is optional - pass nil or empty slice to use defaults.\n\tProjectOptionsFns []cli.ProjectOptionsFn\n\n\t// LoadListeners receive events during project loading.\n\t// All registered listeners will be notified of events.\n\t// This is optional - pass nil or empty slice if not needed.\n\tLoadListeners []LoadListener\n\n\tOCI OCIOptions\n}\n\ntype OCIOptions struct {\n\tInsecureRegistries []string\n}\n\n// Compose is the API interface one can use to programmatically use docker/compose in a third-party software\n// Use [compose.NewComposeService] to get an actual instance\ntype Compose interface {\n\t// Build executes the equivalent to a `compose build`\n\tBuild(ctx context.Context, project *types.Project, options BuildOptions) error\n\t// Push executes the equivalent to a `compose push`\n\tPush(ctx context.Context, project *types.Project, options PushOptions) error\n\t// Pull executes the equivalent of a `compose pull`\n\tPull(ctx context.Context, project *types.Project, options PullOptions) error\n\t// Create executes the equivalent to a `compose create`\n\tCreate(ctx context.Context, project *types.Project, options CreateOptions) error\n\t// Start executes the equivalent to a `compose start`\n\tStart(ctx context.Context, projectName string, options StartOptions) error\n\t// Restart restarts containers\n\tRestart(ctx context.Context, projectName string, options RestartOptions) error\n\t// Stop executes the equivalent to a `compose stop`\n\tStop(ctx context.Context, projectName string, options StopOptions) error\n\t// Up executes the equivalent to a `compose up`\n\tUp(ctx context.Context, project *types.Project, options UpOptions) error\n\t// Down executes the equivalent to a `compose down`\n\tDown(ctx 
context.Context, projectName string, options DownOptions) error\n\t// Logs executes the equivalent to a `compose logs`\n\tLogs(ctx context.Context, projectName string, consumer LogConsumer, options LogOptions) error\n\t// Ps executes the equivalent to a `compose ps`\n\tPs(ctx context.Context, projectName string, options PsOptions) ([]ContainerSummary, error)\n\t// List executes the equivalent to a `docker stack ls`\n\tList(ctx context.Context, options ListOptions) ([]Stack, error)\n\t// Kill executes the equivalent to a `compose kill`\n\tKill(ctx context.Context, projectName string, options KillOptions) error\n\t// RunOneOffContainer creates a service oneoff container and starts its dependencies\n\tRunOneOffContainer(ctx context.Context, project *types.Project, opts RunOptions) (int, error)\n\t// Remove executes the equivalent to a `compose rm`\n\tRemove(ctx context.Context, projectName string, options RemoveOptions) error\n\t// Exec executes a command in a running service container\n\tExec(ctx context.Context, projectName string, options RunOptions) (int, error)\n\t// Attach STDIN,STDOUT,STDERR to a running service container\n\tAttach(ctx context.Context, projectName string, options AttachOptions) error\n\t// Copy copies a file/folder between a service container and the local filesystem\n\tCopy(ctx context.Context, projectName string, options CopyOptions) error\n\t// Pause executes the equivalent to a `compose pause`\n\tPause(ctx context.Context, projectName string, options PauseOptions) error\n\t// UnPause executes the equivalent to a `compose unpause`\n\tUnPause(ctx context.Context, projectName string, options PauseOptions) error\n\t// Top executes the equivalent to a `compose top`\n\tTop(ctx context.Context, projectName string, services []string) ([]ContainerProcSummary, error)\n\t// Events executes the equivalent to a `compose events`\n\tEvents(ctx context.Context, projectName string, options EventsOptions) error\n\t// Port executes the equivalent to a 
`compose port`\n\tPort(ctx context.Context, projectName string, service string, port uint16, options PortOptions) (string, int, error)\n\t// Publish executes the equivalent to a `compose publish`\n\tPublish(ctx context.Context, project *types.Project, repository string, options PublishOptions) error\n\t// Images executes the equivalent of a `compose images`\n\tImages(ctx context.Context, projectName string, options ImagesOptions) (map[string]ImageSummary, error)\n\t// Watch services' development context and sync/notify/rebuild/restart on changes\n\tWatch(ctx context.Context, project *types.Project, options WatchOptions) error\n\t// Viz generates a graphviz graph of the project services\n\tViz(ctx context.Context, project *types.Project, options VizOptions) (string, error)\n\t// Wait blocks until at least one of the services' container exits\n\tWait(ctx context.Context, projectName string, options WaitOptions) (int64, error)\n\t// Scale manages numbers of container instances running per service\n\tScale(ctx context.Context, project *types.Project, options ScaleOptions) error\n\t// Export a service container's filesystem as a tar archive\n\tExport(ctx context.Context, projectName string, options ExportOptions) error\n\t// Create a new image from a service container's changes\n\tCommit(ctx context.Context, projectName string, options CommitOptions) error\n\t// Generate generates a Compose Project from existing containers\n\tGenerate(ctx context.Context, options GenerateOptions) (*types.Project, error)\n\t// Volumes executes the equivalent to a `docker volume ls`\n\tVolumes(ctx context.Context, project string, options VolumesOptions) ([]VolumesSummary, error)\n\t// LoadProject loads and validates a Compose project from configuration files.\n\tLoadProject(ctx context.Context, options ProjectLoadOptions) (*types.Project, error)\n}\n\ntype VolumesOptions struct {\n\tServices []string\n}\n\ntype VolumesSummary = volume.Volume\n\ntype ScaleOptions struct {\n\tServices 
[]string\n}\n\ntype WaitOptions struct {\n\t// Services passed in the command line to be waited\n\tServices []string\n\t// Executes a down when a container exits\n\tDownProjectOnContainerExit bool\n}\n\ntype VizOptions struct {\n\t// IncludeNetworks if true, network names a container is attached to should appear in the graph node\n\tIncludeNetworks bool\n\t// IncludePorts if true, ports a container exposes should appear in the graph node\n\tIncludePorts bool\n\t// IncludeImageName if true, name of the image used to create a container should appear in the graph node\n\tIncludeImageName bool\n\t// Indentation string to be used to indent graphviz code, e.g. \"\\t\", \"    \"\n\tIndentation string\n}\n\n// WatchLogger is a reserved name to log watch events\nconst WatchLogger = \"#watch\"\n\n// WatchOptions group options of the Watch API\ntype WatchOptions struct {\n\tBuild    *BuildOptions\n\tLogTo    LogConsumer\n\tPrune    bool\n\tServices []string\n}\n\n// BuildOptions group options of the Build API\ntype BuildOptions struct {\n\t// Pull always attempt to pull a newer version of the image\n\tPull bool\n\t// Push pushes service images\n\tPush bool\n\t// Progress set type of progress output (\"auto\", \"plain\", \"tty\")\n\tProgress string\n\t// Args set build-time args\n\tArgs types.MappingWithEquals\n\t// NoCache disables cache use\n\tNoCache bool\n\t// Quiet make the build process not output to the console\n\tQuiet bool\n\t// Services passed in the command line to be built\n\tServices []string\n\t// Deps also build selected services dependencies\n\tDeps bool\n\t// Ssh authentications passed in the command line\n\tSSHs []types.SSHKey\n\t// Memory limit for the build container\n\tMemory int64\n\t// Builder name passed in the command line\n\tBuilder string\n\t// Print don't actually run builder but print equivalent build config\n\tPrint bool\n\t// Check let builder validate build configuration\n\tCheck bool\n\t// Attestations allows to enable attestations 
generation\n\tAttestations bool\n\t// Provenance generate a provenance attestation\n\tProvenance string\n\t// SBOM generate a SBOM attestation\n\tSBOM string\n\t// Out is the stream to write build progress\n\tOut io.Writer\n}\n\n// Apply mutates project according to build options\nfunc (o BuildOptions) Apply(project *types.Project) error {\n\tplatform := project.Environment[\"DOCKER_DEFAULT_PLATFORM\"]\n\tfor name, service := range project.Services {\n\t\tif service.Provider == nil && service.Image == \"\" && service.Build == nil {\n\t\t\treturn fmt.Errorf(\"invalid service %q. Must specify either image or build\", name)\n\t\t}\n\n\t\tif service.Build == nil {\n\t\t\tcontinue\n\t\t}\n\t\tif platform != \"\" {\n\t\t\tif len(service.Build.Platforms) > 0 && !slices.Contains(service.Build.Platforms, platform) {\n\t\t\t\treturn fmt.Errorf(\"service %q build.platforms does not support value set by DOCKER_DEFAULT_PLATFORM: %s\", name, platform)\n\t\t\t}\n\t\t\tservice.Platform = platform\n\t\t}\n\t\tif service.Platform != \"\" {\n\t\t\tif len(service.Build.Platforms) > 0 && !slices.Contains(service.Build.Platforms, service.Platform) {\n\t\t\t\treturn fmt.Errorf(\"service %q build configuration does not support platform: %s\", name, service.Platform)\n\t\t\t}\n\t\t}\n\n\t\tservice.Build.Pull = service.Build.Pull || o.Pull\n\t\tservice.Build.NoCache = service.Build.NoCache || o.NoCache\n\n\t\tproject.Services[name] = service\n\t}\n\treturn nil\n}\n\n// CreateOptions group options of the Create API\ntype CreateOptions struct {\n\tBuild *BuildOptions\n\t// Services defines the services user interacts with\n\tServices []string\n\t// Remove legacy containers for services that are not defined in the project\n\tRemoveOrphans bool\n\t// Ignore legacy containers for services that are not defined in the project\n\tIgnoreOrphans bool\n\t// Recreate define the strategy to apply on existing containers\n\tRecreate string\n\t// RecreateDependencies define the strategy to apply on 
dependencies services\n\tRecreateDependencies string\n\t// Inherit reuse anonymous volumes from previous container\n\tInherit bool\n\t// Timeout set delay to wait for container to gracefully stop before sending SIGKILL\n\tTimeout *time.Duration\n\t// QuietPull makes the pulling process quiet\n\tQuietPull bool\n}\n\n// StartOptions group options of the Start API\ntype StartOptions struct {\n\t// Project is the compose project used to define this app. Might be nil if user ran command just with project name\n\tProject *types.Project\n\t// Attach to container and forward logs if not nil\n\tAttach LogConsumer\n\t// AttachTo set the services to attach to\n\tAttachTo []string\n\t// OnExit defines behavior when a container stops\n\tOnExit Cascade\n\t// ExitCodeFrom return exit code from specified service\n\tExitCodeFrom string\n\t// Wait won't return until containers reached the running|healthy state\n\tWait        bool\n\tWaitTimeout time.Duration\n\t// Services passed in the command line to be started\n\tServices       []string\n\tWatch          bool\n\tNavigationMenu bool\n}\n\ntype Cascade int\n\nconst (\n\tCascadeIgnore Cascade = iota\n\tCascadeStop   Cascade = iota\n\tCascadeFail   Cascade = iota\n)\n\n// RestartOptions group options of the Restart API\ntype RestartOptions struct {\n\t// Project is the compose project used to define this app. Might be nil if user ran command just with project name\n\tProject *types.Project\n\t// Timeout override container restart timeout\n\tTimeout *time.Duration\n\t// Services passed in the command line to be restarted\n\tServices []string\n\t// NoDeps ignores services dependencies\n\tNoDeps bool\n}\n\n// StopOptions group options of the Stop API\ntype StopOptions struct {\n\t// Project is the compose project used to define this app. 
Might be nil if user ran command just with project name\n\tProject *types.Project\n\t// Timeout override container stop timeout\n\tTimeout *time.Duration\n\t// Services passed in the command line to be stopped\n\tServices []string\n}\n\n// UpOptions group options of the Up API\ntype UpOptions struct {\n\tCreate CreateOptions\n\tStart  StartOptions\n}\n\n// DownOptions group options of the Down API\ntype DownOptions struct {\n\t// RemoveOrphans will cleanup containers that are not declared on the compose model but own the same labels\n\tRemoveOrphans bool\n\t// Project is the compose project used to define this app. Might be nil if user ran `down` just with project name\n\tProject *types.Project\n\t// Timeout override container stop timeout\n\tTimeout *time.Duration\n\t// Images remove image used by services. 'all': Remove all images. 'local': Remove only images that don't have a tag\n\tImages string\n\t// Volumes remove volumes, both declared in the `volumes` section and anonymous ones\n\tVolumes bool\n\t// Services passed in the command line to be stopped\n\tServices []string\n}\n\n// ConfigOptions group options of the Config API\ntype ConfigOptions struct {\n\t// Format define the output format used to dump converted application model (json|yaml)\n\tFormat string\n\t// Output defines the path to save the application model\n\tOutput string\n\t// Resolve image reference to digests\n\tResolveImageDigests bool\n}\n\n// PushOptions group options of the Push API\ntype PushOptions struct {\n\tQuiet          bool\n\tIgnoreFailures bool\n\tImageMandatory bool\n}\n\n// PullOptions group options of the Pull API\ntype PullOptions struct {\n\tQuiet           bool\n\tIgnoreFailures  bool\n\tIgnoreBuildable bool\n}\n\n// ImagesOptions group options of the Images API\ntype ImagesOptions struct {\n\tServices []string\n}\n\n// KillOptions group options of the Kill API\ntype KillOptions struct {\n\t// RemoveOrphans will cleanup containers that are not declared on the compose model 
but own the same labels\n\tRemoveOrphans bool\n\t// Project is the compose project used to define this app. Might be nil if user ran command just with project name\n\tProject *types.Project\n\t// Services passed in the command line to be killed\n\tServices []string\n\t// Signal to send to containers\n\tSignal string\n\t// All can be set to true to try to kill all found containers, independently of their state\n\tAll bool\n}\n\n// RemoveOptions group options of the Remove API\ntype RemoveOptions struct {\n\t// Project is the compose project used to define this app. Might be nil if user ran command just with project name\n\tProject *types.Project\n\t// Stop option passed in the command line\n\tStop bool\n\t// Volumes remove anonymous volumes\n\tVolumes bool\n\t// Force don't ask to confirm removal\n\tForce bool\n\t// Services passed in the command line to be removed\n\tServices []string\n}\n\n// RunOptions group options of the Run API\ntype RunOptions struct {\n\tCreateOptions\n\t// Project is the compose project used to define this app. 
Might be nil if user ran command just with project name\n\tProject           *types.Project\n\tName              string\n\tService           string\n\tCommand           []string\n\tEntrypoint        []string\n\tDetach            bool\n\tAutoRemove        bool\n\tTty               bool\n\tInteractive       bool\n\tWorkingDir        string\n\tUser              string\n\tEnvironment       []string\n\tCapAdd            []string\n\tCapDrop           []string\n\tLabels            types.Labels\n\tPrivileged        bool\n\tUseNetworkAliases bool\n\tNoDeps            bool\n\t// used by exec\n\tIndex int\n}\n\n// AttachOptions group options of the Attach API\ntype AttachOptions struct {\n\tProject    *types.Project\n\tService    string\n\tIndex      int\n\tDetachKeys string\n\tNoStdin    bool\n\tProxy      bool\n}\n\n// EventsOptions group options of the Events API\ntype EventsOptions struct {\n\tServices []string\n\tConsumer func(event Event) error\n\tSince    string\n\tUntil    string\n}\n\n// Event is a container runtime event served by Events API\ntype Event struct {\n\tTimestamp  time.Time\n\tService    string\n\tContainer  string\n\tStatus     string\n\tAttributes map[string]string\n}\n\n// PortOptions group options of the Port API\ntype PortOptions struct {\n\tProtocol string\n\tIndex    int\n}\n\n// OCIVersion controls manifest generation to ensure compatibility\n// with different registries.\n//\n// Currently, this is not exposed as an option to the user – Compose uses\n// OCI 1.0 mode automatically for ECR registries based on domain and OCI 1.1\n// for all other registries.\n//\n// There are likely other popular registries that do not support the OCI 1.1\n// format, so it might make sense to expose this as a CLI flag or see if\n// there's a way to generically probe the registry for support level.\ntype OCIVersion string\n\nconst (\n\tOCIVersion1_0 OCIVersion = \"1.0\"\n\tOCIVersion1_1 OCIVersion = \"1.1\"\n)\n\n// PublishOptions group options of the Publish 
API\ntype PublishOptions struct {\n\tResolveImageDigests bool\n\tApplication         bool\n\tWithEnvironment     bool\n\tOCIVersion          OCIVersion\n\t// Use plain HTTP to access registry. Should only be used for testing purpose\n\tInsecureRegistry bool\n}\n\nfunc (e Event) String() string {\n\tt := e.Timestamp.Format(\"2006-01-02 15:04:05.000000\")\n\tvar attr []string\n\tfor k, v := range e.Attributes {\n\t\tattr = append(attr, fmt.Sprintf(\"%s=%s\", k, v))\n\t}\n\treturn fmt.Sprintf(\"%s container %s %s (%s)\\n\", t, e.Status, e.Container, strings.Join(attr, \", \"))\n}\n\n// ListOptions group options of the ls API\ntype ListOptions struct {\n\tAll bool\n}\n\n// PsOptions group options of the Ps API\ntype PsOptions struct {\n\tProject  *types.Project\n\tAll      bool\n\tServices []string\n}\n\n// CopyOptions group options of the cp API\ntype CopyOptions struct {\n\tSource      string\n\tDestination string\n\tAll         bool\n\tIndex       int\n\tFollowLink  bool\n\tCopyUIDGID  bool\n}\n\n// PortPublisher hold status about published port\ntype PortPublisher struct {\n\tURL           string\n\tTargetPort    int\n\tPublishedPort int\n\tProtocol      string\n}\n\n// ContainerSummary hold high-level description of a container\ntype ContainerSummary struct {\n\tID           string\n\tName         string\n\tNames        []string\n\tImage        string\n\tCommand      string\n\tProject      string\n\tService      string\n\tCreated      int64\n\tState        container.ContainerState\n\tStatus       string\n\tHealth       container.HealthStatus\n\tExitCode     int\n\tPublishers   PortPublishers\n\tLabels       map[string]string\n\tSizeRw       int64 `json:\",omitempty\"`\n\tSizeRootFs   int64 `json:\",omitempty\"`\n\tMounts       []string\n\tNetworks     []string\n\tLocalVolumes int\n}\n\n// PortPublishers is a slice of PortPublisher\ntype PortPublishers []PortPublisher\n\n// Len implements sort.Interface\nfunc (p PortPublishers) Len() int {\n\treturn len(p)\n}\n\n// 
Less implements sort.Interface\nfunc (p PortPublishers) Less(i, j int) bool {\n\tleft := p[i]\n\tright := p[j]\n\tif left.URL != right.URL {\n\t\treturn left.URL < right.URL\n\t}\n\tif left.TargetPort != right.TargetPort {\n\t\treturn left.TargetPort < right.TargetPort\n\t}\n\tif left.PublishedPort != right.PublishedPort {\n\t\treturn left.PublishedPort < right.PublishedPort\n\t}\n\treturn left.Protocol < right.Protocol\n}\n\n// Swap implements sort.Interface\nfunc (p PortPublishers) Swap(i, j int) {\n\tp[i], p[j] = p[j], p[i]\n}\n\n// ContainerProcSummary holds container processes top data\ntype ContainerProcSummary struct {\n\tID        string\n\tName      string\n\tProcesses [][]string\n\tTitles    []string\n\tService   string\n\tReplica   string\n}\n\n// ImageSummary holds container image description\ntype ImageSummary struct {\n\tID          string\n\tRepository  string\n\tTag         string\n\tPlatform    platforms.Platform\n\tSize        int64\n\tCreated     *time.Time\n\tLastTagTime time.Time\n}\n\n// ServiceStatus hold status about a service\ntype ServiceStatus struct {\n\tID         string\n\tName       string\n\tReplicas   int\n\tDesired    int\n\tPorts      []string\n\tPublishers []PortPublisher\n}\n\n// LogOptions defines optional parameters for the `Log` API\ntype LogOptions struct {\n\tProject    *types.Project\n\tIndex      int\n\tServices   []string\n\tTail       string\n\tSince      string\n\tUntil      string\n\tFollow     bool\n\tTimestamps bool\n}\n\n// PauseOptions group options of the Pause API\ntype PauseOptions struct {\n\t// Services passed in the command line to be started\n\tServices []string\n\t// Project is the compose project used to define this app. 
Might be nil if user ran command just with project name\n\tProject *types.Project\n}\n\n// ExportOptions group options of the Export API\ntype ExportOptions struct {\n\tService string\n\tIndex   int\n\tOutput  string\n}\n\n// CommitOptions group options of the Commit API\ntype CommitOptions struct {\n\tService   string\n\tReference string\n\n\tPause   bool\n\tComment string\n\tAuthor  string\n\tChanges opts.ListOpts\n\n\tIndex int\n}\n\ntype GenerateOptions struct {\n\t// ProjectName to set in the Compose file\n\tProjectName string\n\t// Containers passed in the command line to be used as reference for service definition\n\tContainers []string\n}\n\nconst (\n\t// STARTING indicates that stack is being deployed\n\tSTARTING string = \"Starting\"\n\t// RUNNING indicates that stack is deployed and services are running\n\tRUNNING string = \"Running\"\n\t// UPDATING indicates that some stack resources are being recreated\n\tUPDATING string = \"Updating\"\n\t// REMOVING indicates that stack is being deleted\n\tREMOVING string = \"Removing\"\n\t// UNKNOWN indicates unknown stack state\n\tUNKNOWN string = \"Unknown\"\n\t// FAILED indicates that stack deployment failed\n\tFAILED string = \"Failed\"\n)\n\nconst (\n\t// RecreateDiverged to recreate services which configuration diverges from compose model\n\tRecreateDiverged = \"diverged\"\n\t// RecreateForce to force service container being recreated\n\tRecreateForce = \"force\"\n\t// RecreateNever to never recreate existing service containers\n\tRecreateNever = \"never\"\n)\n\n// Stack holds the name and state of a compose application/stack\ntype Stack struct {\n\tID          string\n\tName        string\n\tStatus      string\n\tConfigFiles string\n\tReason      string\n}\n\n// LogConsumer is a callback to process log messages from services\ntype LogConsumer interface {\n\tLog(containerName, message string)\n\tErr(containerName, message string)\n\tStatus(container, msg string)\n}\n\n// ContainerEventListener is a callback to 
process ContainerEvent from services\ntype ContainerEventListener func(event ContainerEvent)\n\n// ContainerEvent notifies that an event has been collected on a source container implementing Service\ntype ContainerEvent struct {\n\tType      int\n\tTime      int64\n\tContainer *ContainerSummary\n\t// Source is the name of the container _without the project prefix_.\n\t//\n\t// This is only suitable for display purposes within Compose, as it's\n\t// not guaranteed to be unique across services.\n\tSource  string\n\tID      string\n\tService string\n\tLine    string\n\t// ExitCode is only set on ContainerEventExited events\n\tExitCode   int\n\tRestarting bool\n}\n\nconst (\n\t// ContainerEventLog is a ContainerEvent of type log on stdout. Line is set\n\tContainerEventLog = iota\n\t// ContainerEventErr is a ContainerEvent of type log on stderr. Line is set\n\tContainerEventErr\n\t// ContainerEventStarted let consumer know a container has been started\n\tContainerEventStarted\n\t// ContainerEventRestarted let consumer know a container has been restarted\n\tContainerEventRestarted\n\t// ContainerEventStopped is a ContainerEvent of type stopped.\n\tContainerEventStopped\n\t// ContainerEventCreated let consumer know a new container has been created\n\tContainerEventCreated\n\t// ContainerEventRecreated let consumer know container stopped but is being replaced\n\tContainerEventRecreated\n\t// ContainerEventExited is a ContainerEvent of type exit. ExitCode is set\n\tContainerEventExited\n\t// HookEventLog is a ContainerEvent of type log emitted by a lifecycle hook. Line is set\n\tHookEventLog\n)\n\n// Separator is used for naming components\nvar Separator = \"-\"\n\n// GetImageNameOrDefault computes the default image name for a service, used to tag built images\nfunc GetImageNameOrDefault(service types.ServiceConfig, projectName string) string {\n\timageName := service.Image\n\tif imageName == \"\" {\n\t\timageName = projectName + Separator + service.Name\n\t}\n\treturn imageName\n}\n"
  },
  {
    "path": "pkg/api/api_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestRunOptionsEnvironmentMap(t *testing.T) {\n\topts := RunOptions{\n\t\tEnvironment: []string{\n\t\t\t\"FOO=BAR\",\n\t\t\t\"ZOT=\",\n\t\t\t\"QIX\",\n\t\t},\n\t}\n\tenv := types.NewMappingWithEquals(opts.Environment)\n\tassert.Equal(t, *env[\"FOO\"], \"BAR\")\n\tassert.Equal(t, *env[\"ZOT\"], \"\")\n\tassert.Check(t, env[\"QIX\"] == nil)\n}\n"
  },
  {
    "path": "pkg/api/context.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\n// ContextInfo provides Docker context information for advanced scenarios\ntype ContextInfo interface {\n\t// CurrentContext returns the name of the current Docker context\n\t// Returns \"default\" for simple clients without context support\n\tCurrentContext() string\n\n\t// ServerOSType returns the Docker daemon's operating system (linux/windows/darwin)\n\t// Used for OS-specific compatibility checks\n\tServerOSType() string\n\n\t// BuildKitEnabled determines whether BuildKit should be used for builds\n\t// Checks DOCKER_BUILDKIT env var, config, and daemon capabilities\n\tBuildKitEnabled() (bool, error)\n}\n"
  },
  {
    "path": "pkg/api/env.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\n// ComposeCompatibility try to mimic compose v1 as much as possible\nconst ComposeCompatibility = \"COMPOSE_COMPATIBILITY\"\n"
  },
  {
    "path": "pkg/api/errors.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"errors\"\n)\n\nconst (\n\t// ExitCodeLoginRequired exit code when command cannot execute because it requires cloud login\n\t// This will be used by VSCode to detect when creating context if the user needs to login first\n\tExitCodeLoginRequired = 5\n)\n\nvar (\n\t// ErrNotFound is returned when an object is not found\n\tErrNotFound = errors.New(\"not found\")\n\t// ErrAlreadyExists is returned when an object already exists\n\tErrAlreadyExists = errors.New(\"already exists\")\n\t// ErrForbidden is returned when an operation is not permitted\n\tErrForbidden = errors.New(\"forbidden\")\n\t// ErrUnknown is returned when the error type is unmapped\n\tErrUnknown = errors.New(\"unknown\")\n\t// ErrNotImplemented is returned when a backend doesn't implement an action\n\tErrNotImplemented = errors.New(\"not implemented\")\n\t// ErrUnsupportedFlag is returned when a backend doesn't support a flag\n\tErrUnsupportedFlag = errors.New(\"unsupported flag\")\n\t// ErrCanceled is returned when the command was canceled by user\n\tErrCanceled = errors.New(\"canceled\")\n\t// ErrParsingFailed is returned when a string cannot be parsed\n\tErrParsingFailed = errors.New(\"parsing failed\")\n\t// ErrNoResources is returned when operation didn't selected any resource\n\tErrNoResources = errors.New(\"no resources\")\n)\n\n// 
IsNotFoundError returns true if the unwrapped error is ErrNotFound\nfunc IsNotFoundError(err error) bool {\n\treturn errors.Is(err, ErrNotFound)\n}\n\n// IsAlreadyExistsError returns true if the unwrapped error is ErrAlreadyExists\nfunc IsAlreadyExistsError(err error) bool {\n\treturn errors.Is(err, ErrAlreadyExists)\n}\n\n// IsForbiddenError returns true if the unwrapped error is ErrForbidden\nfunc IsForbiddenError(err error) bool {\n\treturn errors.Is(err, ErrForbidden)\n}\n\n// IsUnknownError returns true if the unwrapped error is ErrUnknown\nfunc IsUnknownError(err error) bool {\n\treturn errors.Is(err, ErrUnknown)\n}\n\n// IsErrUnsupportedFlag returns true if the unwrapped error is ErrUnsupportedFlag\nfunc IsErrUnsupportedFlag(err error) bool {\n\treturn errors.Is(err, ErrUnsupportedFlag)\n}\n\n// IsErrNotImplemented returns true if the unwrapped error is ErrNotImplemented\nfunc IsErrNotImplemented(err error) bool {\n\treturn errors.Is(err, ErrNotImplemented)\n}\n\n// IsErrParsingFailed returns true if the unwrapped error is ErrParsingFailed\nfunc IsErrParsingFailed(err error) bool {\n\treturn errors.Is(err, ErrParsingFailed)\n}\n\n// IsErrCanceled returns true if the unwrapped error is ErrCanceled\nfunc IsErrCanceled(err error) bool {\n\treturn errors.Is(err, ErrCanceled)\n}\n"
  },
  {
    "path": "pkg/api/errors_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestIsNotFound(t *testing.T) {\n\terr := fmt.Errorf(`object \"name\": %w`, ErrNotFound)\n\tassert.Assert(t, IsNotFoundError(err))\n\n\tassert.Assert(t, !IsNotFoundError(errors.New(\"another error\")))\n}\n\nfunc TestIsAlreadyExists(t *testing.T) {\n\terr := fmt.Errorf(`object \"name\": %w`, ErrAlreadyExists)\n\tassert.Assert(t, IsAlreadyExistsError(err))\n\n\tassert.Assert(t, !IsAlreadyExistsError(errors.New(\"another error\")))\n}\n\nfunc TestIsForbidden(t *testing.T) {\n\terr := fmt.Errorf(`object \"name\": %w`, ErrForbidden)\n\tassert.Assert(t, IsForbiddenError(err))\n\n\tassert.Assert(t, !IsForbiddenError(errors.New(\"another error\")))\n}\n\nfunc TestIsUnknown(t *testing.T) {\n\terr := fmt.Errorf(`object \"name\": %w`, ErrUnknown)\n\tassert.Assert(t, IsUnknownError(err))\n\n\tassert.Assert(t, !IsUnknownError(errors.New(\"another error\")))\n}\n"
  },
  {
    "path": "pkg/api/event.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"context\"\n)\n\n// EventStatus indicates the status of an action\ntype EventStatus int\n\nconst (\n\t// Working means that the current task is working\n\tWorking EventStatus = iota\n\t// Done means that the current task is done\n\tDone\n\t// Warning means that the current task has warning\n\tWarning\n\t// Error means that the current task has errored\n\tError\n)\n\n// ResourceCompose is a special resource ID used when event applies to all resources in the application\nconst ResourceCompose = \"Compose\"\n\nconst (\n\tStatusError            = \"Error\"\n\tStatusCreating         = \"Creating\"\n\tStatusStarting         = \"Starting\"\n\tStatusStarted          = \"Started\"\n\tStatusWaiting          = \"Waiting\"\n\tStatusHealthy          = \"Healthy\"\n\tStatusExited           = \"Exited\"\n\tStatusRestarting       = \"Restarting\"\n\tStatusRestarted        = \"Restarted\"\n\tStatusRunning          = \"Running\"\n\tStatusCreated          = \"Created\"\n\tStatusStopping         = \"Stopping\"\n\tStatusStopped          = \"Stopped\"\n\tStatusKilling          = \"Killing\"\n\tStatusKilled           = \"Killed\"\n\tStatusRemoving         = \"Removing\"\n\tStatusRemoved          = \"Removed\"\n\tStatusBuilding         = \"Building\"\n\tStatusBuilt            = \"Built\"\n\tStatusPulling          = 
\"Pulling\"\n\tStatusPulled           = \"Pulled\"\n\tStatusCommitting       = \"Committing\"\n\tStatusCommitted        = \"Committed\"\n\tStatusCopying          = \"Copying\"\n\tStatusCopied           = \"Copied\"\n\tStatusExporting        = \"Exporting\"\n\tStatusExported         = \"Exported\"\n\tStatusDownloading      = \"Downloading\"\n\tStatusDownloadComplete = \"Download complete\"\n\tStatusConfiguring      = \"Configuring\"\n\tStatusConfigured       = \"Configured\"\n)\n\n// Resource represents status change and progress for a compose resource.\ntype Resource struct {\n\tID       string\n\tParentID string\n\tText     string\n\tDetails  string\n\tStatus   EventStatus\n\tCurrent  int64\n\tPercent  int\n\tTotal    int64\n}\n\nfunc (e *Resource) StatusText() string {\n\tswitch e.Status {\n\tcase Working:\n\t\treturn \"Working\"\n\tcase Warning:\n\t\treturn \"Warning\"\n\tcase Done:\n\t\treturn \"Done\"\n\tdefault:\n\t\treturn \"Error\"\n\t}\n}\n\n// EventProcessor is notified about Compose operations and tasks\ntype EventProcessor interface {\n\t// Start is triggered as a Compose operation is starting with context\n\tStart(ctx context.Context, operation string)\n\t// On notify about (sub)task and progress processing operation\n\tOn(events ...Resource)\n\t// Done is triggered as a Compose operation completed\n\tDone(operation string, success bool)\n}\n"
  },
  {
    "path": "pkg/api/labels.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"github.com/hashicorp/go-version\"\n\n\t\"github.com/docker/compose/v5/internal\"\n)\n\nconst (\n\t// ProjectLabel allow to track resource related to a compose project\n\tProjectLabel = \"com.docker.compose.project\"\n\t// ServiceLabel allow to track resource related to a compose service\n\tServiceLabel = \"com.docker.compose.service\"\n\t// ConfigHashLabel stores configuration hash for a compose service\n\tConfigHashLabel = \"com.docker.compose.config-hash\"\n\t// ContainerNumberLabel stores the container index of a replicated service\n\tContainerNumberLabel = \"com.docker.compose.container-number\"\n\t// VolumeLabel allow to track resource related to a compose volume\n\tVolumeLabel = \"com.docker.compose.volume\"\n\t// NetworkLabel allow to track resource related to a compose network\n\tNetworkLabel = \"com.docker.compose.network\"\n\t// WorkingDirLabel stores absolute path to compose project working directory\n\tWorkingDirLabel = \"com.docker.compose.project.working_dir\"\n\t// ConfigFilesLabel stores absolute path to compose project configuration files\n\tConfigFilesLabel = \"com.docker.compose.project.config_files\"\n\t// EnvironmentFileLabel stores absolute path to compose project env file set by `--env-file`\n\tEnvironmentFileLabel = \"com.docker.compose.project.environment_file\"\n\t// OneoffLabel 
stores value 'True' for one-off containers created by `compose run`\n\tOneoffLabel = \"com.docker.compose.oneoff\"\n\t// SlugLabel stores unique slug used for one-off container identity\n\tSlugLabel = \"com.docker.compose.slug\"\n\t// ImageDigestLabel stores digest of the container image used to run service\n\tImageDigestLabel = \"com.docker.compose.image\"\n\t// DependenciesLabel stores service dependencies\n\tDependenciesLabel = \"com.docker.compose.depends_on\"\n\t// VersionLabel stores the compose tool version used to build/run application\n\tVersionLabel = \"com.docker.compose.version\"\n\t// ImageBuilderLabel stores the builder (classic or BuildKit) used to produce the image.\n\tImageBuilderLabel = \"com.docker.compose.image.builder\"\n\t// ContainerReplaceLabel is set when container is created to replace another container (recreated)\n\tContainerReplaceLabel = \"com.docker.compose.replace\"\n)\n\n// ComposeVersion is the compose tool version as declared by label VersionLabel\nvar ComposeVersion string\n\nfunc init() {\n\tv, err := version.NewVersion(internal.Version)\n\tif err == nil {\n\t\tComposeVersion = v.Core().String()\n\t}\n}\n"
  },
  {
    "path": "pkg/api/labels_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage api\n\nimport (\n\t\"testing\"\n\n\t\"github.com/hashicorp/go-version\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/internal\"\n)\n\nfunc TestComposeVersionInitialization(t *testing.T) {\n\tv, err := version.NewVersion(internal.Version)\n\tif err != nil {\n\t\tassert.Equal(t, \"\", ComposeVersion, \"ComposeVersion should be empty for a non-semver internal version (e.g. 'devel')\")\n\t} else {\n\t\texpected := v.Core().String()\n\t\tassert.Equal(t, expected, ComposeVersion, \"ComposeVersion should be the core of internal.Version\")\n\t}\n}\n"
  },
  {
    "path": "pkg/bridge/convert.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage bridge\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"os/user\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/docker/cli/cli/command\"\n\tcli \"github.com/docker/cli/cli/command/container\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/moby/moby/client/pkg/jsonmessage\"\n\t\"github.com/sirupsen/logrus\"\n\t\"go.yaml.in/yaml/v4\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\ntype ConvertOptions struct {\n\tOutput          string\n\tTemplates       string\n\tTransformations []string\n}\n\nfunc Convert(ctx context.Context, dockerCli command.Cli, project *types.Project, opts ConvertOptions) error {\n\tif len(opts.Transformations) == 0 {\n\t\topts.Transformations = []string{DefaultTransformerImage}\n\t}\n\t// Load image references, secrets and configs, also expose ports\n\tproject, err := LoadAdditionalResources(ctx, dockerCli, project)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// for user to rely on compose.yaml attribute names, not go struct ones, we marshall back into YAML\n\traw, err := 
project.MarshalYAML(types.WithSecretContent)\n\t// Marshal to YAML\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot render project into yaml: %w\", err)\n\t}\n\tvar model map[string]any\n\terr = yaml.Unmarshal(raw, &model)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot render project into yaml: %w\", err)\n\t}\n\n\tif opts.Output != \"\" {\n\t\t_ = os.RemoveAll(opts.Output)\n\t\terr := os.MkdirAll(opts.Output, 0o744)\n\t\tif err != nil && !os.IsExist(err) {\n\t\t\treturn fmt.Errorf(\"cannot create output folder: %w\", err)\n\t\t}\n\t}\n\t// Run transformer images\n\treturn convert(ctx, dockerCli, model, opts)\n}\n\nfunc convert(ctx context.Context, dockerCli command.Cli, model map[string]any, opts ConvertOptions) error {\n\traw, err := yaml.Marshal(model)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdir, err := os.MkdirTemp(\"\", \"compose-convert-*\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\terr := os.RemoveAll(dir)\n\t\tif err != nil {\n\t\t\tlogrus.Warnf(\"failed to remove temp dir %s: %v\", dir, err)\n\t\t}\n\t}()\n\n\tcomposeYaml := filepath.Join(dir, \"compose.yaml\")\n\terr = os.WriteFile(composeYaml, raw, 0o600)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tout, err := filepath.Abs(opts.Output)\n\tif err != nil {\n\t\treturn err\n\t}\n\tbinds := []string{\n\t\tfmt.Sprintf(\"%s:%s\", dir, \"/in\"),\n\t\tfmt.Sprintf(\"%s:%s\", out, \"/out\"),\n\t}\n\tif opts.Templates != \"\" {\n\t\ttemplateDir, err := filepath.Abs(opts.Templates)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tbinds = append(binds, fmt.Sprintf(\"%s:%s\", templateDir, \"/templates\"))\n\t}\n\n\tfor _, transformation := range opts.Transformations {\n\t\t_, err = inspectWithPull(ctx, dockerCli, transformation)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tcontainerConfig := &container.Config{\n\t\t\tImage: transformation,\n\t\t\tEnv:   []string{\"LICENSE_AGREEMENT=true\"},\n\t\t}\n\t\t// On POSIX systems, this is a decimal number representing the 
uid.\n\t\t// On Windows, this is a security identifier (SID) in a string format and the engine isn't able to manage it\n\t\tif runtime.GOOS != \"windows\" {\n\t\t\tusr, err := user.Current()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcontainerConfig.User = usr.Uid\n\t\t}\n\t\tcreated, err := dockerCli.Client().ContainerCreate(ctx, client.ContainerCreateOptions{\n\t\t\tConfig: containerConfig,\n\t\t\tHostConfig: &container.HostConfig{\n\t\t\t\tBinds:      binds,\n\t\t\t\tAutoRemove: true,\n\t\t\t},\n\t\t\tNetworkingConfig: &network.NetworkingConfig{},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\terr = cli.RunStart(ctx, dockerCli, &cli.StartOptions{\n\t\t\tAttach:     true,\n\t\t\tContainers: []string{created.ID},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// LoadAdditionalResources loads additional resources from the project, such as image references, secrets, configs and exposed ports\nfunc LoadAdditionalResources(ctx context.Context, dockerCLI command.Cli, project *types.Project) (*types.Project, error) {\n\tfor name, service := range project.Services {\n\t\timageName := api.GetImageNameOrDefault(service, project.Name)\n\n\t\tinspect, err := inspectWithPull(ctx, dockerCLI, imageName)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tservice.Image = imageName\n\t\texposed := utils.Set[string]{}\n\t\texposed.AddAll(service.Expose...)\n\t\tfor port := range inspect.Config.ExposedPorts {\n\t\t\tp, err := network.ParsePort(port)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\texposed.Add(strconv.Itoa(int(p.Num())))\n\t\t}\n\t\tfor _, port := range service.Ports {\n\t\t\texposed.Add(strconv.Itoa(int(port.Target)))\n\t\t}\n\t\tservice.Expose = exposed.Elements()\n\t\tproject.Services[name] = service\n\t}\n\n\tfor name, secret := range project.Secrets {\n\t\tf, err := loadFileObject(types.FileObjectConfig(secret))\n\t\tif err != nil {\n\t\t\treturn nil, 
err\n\t\t}\n\t\tproject.Secrets[name] = types.SecretConfig(f)\n\t}\n\n\tfor name, config := range project.Configs {\n\t\tf, err := loadFileObject(types.FileObjectConfig(config))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tproject.Configs[name] = types.ConfigObjConfig(f)\n\t}\n\n\treturn project, nil\n}\n\nfunc loadFileObject(conf types.FileObjectConfig) (types.FileObjectConfig, error) {\n\tif !conf.External {\n\t\tswitch {\n\t\tcase conf.Environment != \"\":\n\t\t\tconf.Content = os.Getenv(conf.Environment)\n\t\tcase conf.File != \"\":\n\t\t\tbytes, err := os.ReadFile(conf.File)\n\t\t\tif err != nil {\n\t\t\t\treturn conf, err\n\t\t\t}\n\t\t\tconf.Content = string(bytes)\n\t\t}\n\t}\n\treturn conf, nil\n}\n\nfunc inspectWithPull(ctx context.Context, dockerCli command.Cli, imageName string) (image.InspectResponse, error) {\n\tinspect, err := dockerCli.Client().ImageInspect(ctx, imageName)\n\tif errdefs.IsNotFound(err) {\n\t\tvar stream io.ReadCloser\n\t\tstream, err = dockerCli.Client().ImagePull(ctx, imageName, client.ImagePullOptions{})\n\t\tif err != nil {\n\t\t\treturn image.InspectResponse{}, err\n\t\t}\n\t\tdefer func() { _ = stream.Close() }()\n\n\t\tout := dockerCli.Out()\n\t\terr = jsonmessage.DisplayJSONMessagesStream(stream, out, out.FD(), out.IsTerminal(), nil)\n\t\tif err != nil {\n\t\t\treturn image.InspectResponse{}, err\n\t\t}\n\t\tif inspect, err = dockerCli.Client().ImageInspect(ctx, imageName); err != nil {\n\t\t\treturn image.InspectResponse{}, err\n\t\t}\n\t}\n\treturn inspect.InspectResponse, err\n}\n"
  },
  {
    "path": "pkg/bridge/transformers.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage bridge\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/moby/go-archive\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/client\"\n)\n\nconst (\n\tTransformerLabel        = \"com.docker.compose.bridge\"\n\tDefaultTransformerImage = \"docker/compose-bridge-kubernetes\"\n\n\ttemplatesPath = \"/templates\"\n)\n\ntype CreateTransformerOptions struct {\n\tDest string\n\tFrom string\n}\n\nfunc CreateTransformer(ctx context.Context, dockerCli command.Cli, options CreateTransformerOptions) error {\n\tif options.From == \"\" {\n\t\toptions.From = DefaultTransformerImage\n\t}\n\tout, err := filepath.Abs(options.Dest)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif _, err := os.Stat(out); err == nil {\n\t\treturn fmt.Errorf(\"output folder %s already exists\", out)\n\t}\n\n\ttmpl := filepath.Join(out, \"templates\")\n\terr = os.MkdirAll(tmpl, 0o744)\n\tif err != nil && !os.IsExist(err) {\n\t\treturn fmt.Errorf(\"cannot create output folder: %w\", err)\n\t}\n\n\tif err := command.ValidateOutputPath(out); err != nil {\n\t\treturn err\n\t}\n\n\tcreated, err := dockerCli.Client().ContainerCreate(ctx, client.ContainerCreateOptions{\n\t\tImage: options.From,\n\t})\n\n\tdefer func() {\n\t\t_, _ = 
dockerCli.Client().ContainerRemove(context.Background(), created.ID, client.ContainerRemoveOptions{\n\t\t\tForce: true,\n\t\t})\n\t}()\n\n\tif err != nil {\n\t\treturn err\n\t}\n\tresp, err := dockerCli.Client().CopyFromContainer(ctx, created.ID, client.CopyFromContainerOptions{\n\t\tSourcePath: templatesPath,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() {\n\t\t_ = resp.Content.Close()\n\t}()\n\n\tsrcInfo := archive.CopyInfo{\n\t\tPath:   templatesPath,\n\t\tExists: true,\n\t\tIsDir:  resp.Stat.Mode.IsDir(),\n\t}\n\n\tpreArchive := resp.Content\n\tif srcInfo.RebaseName != \"\" {\n\t\t_, srcBase := archive.SplitPathDirEntry(srcInfo.Path)\n\t\tpreArchive = archive.RebaseArchiveEntries(resp.Content, srcBase, srcInfo.RebaseName)\n\t}\n\n\tif err := archive.CopyTo(preArchive, srcInfo, out); err != nil {\n\t\treturn err\n\t}\n\n\tdockerfile := `FROM docker/compose-bridge-transformer\nLABEL com.docker.compose.bridge=transformation\nCOPY templates /templates\n`\n\tif err := os.WriteFile(filepath.Join(out, \"Dockerfile\"), []byte(dockerfile), 0o700); err != nil {\n\t\treturn err\n\t}\n\t_, err = fmt.Fprintf(dockerCli.Out(), \"Transformer created in %q\\n\", out)\n\treturn err\n}\n\nfunc ListTransformers(ctx context.Context, dockerCli command.Cli) ([]image.Summary, error) {\n\tres, err := dockerCli.Client().ImageList(ctx, client.ImageListOptions{\n\t\tFilters: make(client.Filters).Add(\"label\", fmt.Sprintf(\"%s=%s\", TransformerLabel, \"transformation\")),\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn res.Items, nil\n}\n"
  },
  {
    "path": "pkg/compose/apiSocket.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/config/configfile\"\n)\n\n// --use-api-socket is not actually supported by the Docker Engine\n// but is a client-side hack (see https://github.com/docker/cli/blob/master/cli/command/container/create.go#L246)\n// we replicate here by transforming the project model\n\nfunc (s *composeService) useAPISocket(project *types.Project) (*types.Project, error) {\n\tuseAPISocket := false\n\tfor _, service := range project.Services {\n\t\tif service.UseAPISocket {\n\t\t\tuseAPISocket = true\n\t\t\tbreak\n\t\t}\n\t}\n\tif !useAPISocket {\n\t\treturn project, nil\n\t}\n\n\tif s.getContextInfo().ServerOSType() == \"windows\" {\n\t\treturn nil, errors.New(\"use_api_socket can't be used with a Windows Docker Engine\")\n\t}\n\n\tcreds, err := s.configFile().GetAllCredentials()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"resolving credentials failed: %w\", err)\n\t}\n\n\tnewConfig := &configfile.ConfigFile{\n\t\tAuthConfigs: creds,\n\t}\n\tvar configBuf bytes.Buffer\n\tif err := newConfig.SaveToWriter(&configBuf); err != nil {\n\t\treturn nil, fmt.Errorf(\"saving creds for API socket: %w\", err)\n\t}\n\n\tproject.Configs[\"#apisocket\"] = types.ConfigObjConfig{\n\t\tContent: 
configBuf.String(),\n\t}\n\n\tfor name, service := range project.Services {\n\t\tif !service.UseAPISocket {\n\t\t\tcontinue\n\t\t}\n\t\tservice.Volumes = append(service.Volumes, types.ServiceVolumeConfig{\n\t\t\tType:   types.VolumeTypeBind,\n\t\t\tSource: \"/var/run/docker.sock\",\n\t\t\tTarget: \"/var/run/docker.sock\",\n\t\t})\n\n\t\t_, envvarPresent := service.Environment[\"DOCKER_CONFIG\"]\n\n\t\t// If the DOCKER_CONFIG env var is already present, we assume the client knows\n\t\t// what they're doing and don't inject the creds.\n\t\tif !envvarPresent {\n\t\t\t// Set our special little location for the config file.\n\t\t\tpath := \"/run/secrets/docker\"\n\t\t\tservice.Environment[\"DOCKER_CONFIG\"] = &path\n\t\t}\n\n\t\tservice.Configs = append(service.Configs, types.ServiceConfigObjConfig{\n\t\t\tSource: \"#apisocket\",\n\t\t\tTarget: \"/run/secrets/docker/config.json\",\n\t\t})\n\t\tproject.Services[name] = service\n\t}\n\treturn project, nil\n}\n"
  },
  {
    "path": "pkg/compose/api_versions.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\n// Docker Engine API version constants.\n// These versions correspond to specific Docker Engine releases and their features.\nconst (\n\t// apiVersion148 represents Docker Engine API version 1.48 (Engine v28.0).\n\t//\n\t// New features in this version:\n\t//  - Volume mounts with type=image support\n\t//\n\t// Before this version:\n\t//  - Only bind, volume, and tmpfs mount types were supported\n\tapiVersion148 = \"1.48\"\n\n\t// apiVersion149 represents Docker Engine API version 1.49 (Engine v28.1).\n\t//\n\t// New features in this version:\n\t//  - Network interface_name configuration\n\t//  - Platform parameter in ImageList API\n\t//\n\t// Before this version:\n\t//  - interface_name was not configurable\n\t//  - ImageList didn't support platform filtering\n\tapiVersion149 = \"1.49\"\n)\n\n// Docker Engine version strings for user-facing error messages.\n// These should be used in error messages to provide clear version requirements.\nconst (\n\t// dockerEngineV28 is the major version string for Docker Engine 28.x\n\tdockerEngineV28 = \"v28\"\n\n\t// DockerEngineV28_1 is the specific version string for Docker Engine 28.1\n\tDockerEngineV28_1 = \"v28.1\"\n)\n\n// Build tool version constants\nconst (\n\t// buildxMinVersion is the minimum required version of buildx for compose build\n\tbuildxMinVersion = 
\"0.17.0\"\n)\n"
  },
  {
    "path": "pkg/compose/attach.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/api/pkg/stdcopy\"\n\tcontainerType \"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc (s *composeService) attach(ctx context.Context, project *types.Project, listener api.ContainerEventListener, selectedServices []string) (Containers, error) {\n\tcontainers, err := s.getContainers(ctx, project.Name, oneOffExclude, true, selectedServices...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(containers) == 0 {\n\t\treturn containers, nil\n\t}\n\n\tcontainers.sorted() // This enforces predictable colors assignment\n\n\tvar names []string\n\tfor _, c := range containers {\n\t\tnames = append(names, getContainerNameWithoutProject(c))\n\t}\n\n\t_, err = fmt.Fprintf(s.stdout(), \"Attaching to %s\\n\", strings.Join(names, \", \"))\n\tif err != nil {\n\t\tlogrus.Debugf(\"failed to write attach message: %v\", err)\n\t}\n\n\tfor _, ctr := range containers {\n\t\terr := s.attachContainer(ctx, ctr, listener)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn containers, nil\n}\n\nfunc (s 
*composeService) attachContainer(ctx context.Context, container containerType.Summary, listener api.ContainerEventListener) error {\n\tservice := container.Labels[api.ServiceLabel]\n\tname := getContainerNameWithoutProject(container)\n\treturn s.doAttachContainer(ctx, service, container.ID, name, listener)\n}\n\nfunc (s *composeService) doAttachContainer(ctx context.Context, service, id, name string, listener api.ContainerEventListener) error {\n\tinspect, err := s.apiClient().ContainerInspect(ctx, id, client.ContainerInspectOptions{})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\twOut := utils.GetWriter(func(line string) {\n\t\tlistener(api.ContainerEvent{\n\t\t\tType:    api.ContainerEventLog,\n\t\t\tSource:  name,\n\t\t\tID:      id,\n\t\t\tService: service,\n\t\t\tLine:    line,\n\t\t})\n\t})\n\twErr := utils.GetWriter(func(line string) {\n\t\tlistener(api.ContainerEvent{\n\t\t\tType:    api.ContainerEventErr,\n\t\t\tSource:  name,\n\t\t\tID:      id,\n\t\t\tService: service,\n\t\t\tLine:    line,\n\t\t})\n\t})\n\n\terr = s.attachContainerStreams(ctx, id, inspect.Container.Config.Tty, wOut, wErr)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (s *composeService) attachContainerStreams(ctx context.Context, container string, tty bool, stdout, stderr io.WriteCloser) error {\n\tstreamOut, err := s.getContainerStreams(ctx, container)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif stdout != nil {\n\t\tgo func() {\n\t\t\tdefer func() {\n\t\t\t\tif err := stdout.Close(); err != nil {\n\t\t\t\t\tlogrus.Debugf(\"failed to close stdout: %v\", err)\n\t\t\t\t}\n\t\t\t\tif err := stderr.Close(); err != nil {\n\t\t\t\t\tlogrus.Debugf(\"failed to close stderr: %v\", err)\n\t\t\t\t}\n\t\t\t\tif err := streamOut.Close(); err != nil {\n\t\t\t\t\tlogrus.Debugf(\"failed to close stream output: %v\", err)\n\t\t\t\t}\n\t\t\t}()\n\n\t\t\tvar err error\n\t\t\tif tty {\n\t\t\t\t_, err = io.Copy(stdout, streamOut)\n\t\t\t} else {\n\t\t\t\t_, err = 
stdcopy.StdCopy(stdout, stderr, streamOut)\n\t\t\t}\n\t\t\tif err != nil && !errors.Is(err, io.EOF) {\n\t\t\t\tlogrus.Debugf(\"stream copy error for container %s: %v\", container, err)\n\t\t\t}\n\t\t}()\n\t}\n\treturn nil\n}\n\nfunc (s *composeService) getContainerStreams(ctx context.Context, container string) (io.ReadCloser, error) {\n\tcnx, err := s.apiClient().ContainerAttach(ctx, container, client.ContainerAttachOptions{\n\t\tStream: true,\n\t\tStdin:  false,\n\t\tStdout: true,\n\t\tStderr: true,\n\t\tLogs:   false,\n\t})\n\tif err == nil {\n\t\tstdout := ContainerStdout{HijackedResponse: cnx.HijackedResponse}\n\t\treturn stdout, nil\n\t}\n\n\t// Fallback to logs API\n\tlogs, err := s.apiClient().ContainerLogs(ctx, container, client.ContainerLogsOptions{\n\t\tShowStdout: true,\n\t\tShowStderr: true,\n\t\tFollow:     true,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn logs, nil\n}\n"
  },
  {
    "path": "pkg/compose/attach_service.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command/container\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Attach(ctx context.Context, projectName string, options api.AttachOptions) error {\n\tprojectName = strings.ToLower(projectName)\n\ttarget, err := s.getSpecifiedContainer(ctx, projectName, oneOffInclude, false, options.Service, options.Index)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdetachKeys := options.DetachKeys\n\tif detachKeys == \"\" {\n\t\tdetachKeys = s.configFile().DetachKeys\n\t}\n\n\tvar attach container.AttachOptions\n\tattach.DetachKeys = detachKeys\n\tattach.NoStdin = options.NoStdin\n\tattach.Proxy = options.Proxy\n\treturn container.RunAttach(ctx, s.dockerCli, target.ID, &attach)\n}\n"
  },
  {
    "path": "pkg/compose/build.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/platforms\"\n\tspecs \"github.com/opencontainers/image-spec/specs-go/v1\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc (s *composeService) Build(ctx context.Context, project *types.Project, options api.BuildOptions) error {\n\terr := options.Apply(project)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn tracing.SpanWrapFunc(\"project/build\", tracing.ProjectOptions(ctx, project),\n\t\t\tfunc(ctx context.Context) error {\n\t\t\t\tbuiltImages, err := s.build(ctx, project, options, nil)\n\t\t\t\tif err == nil && len(builtImages) == 0 {\n\t\t\t\t\tlogrus.Warn(\"No services to build\")\n\t\t\t\t}\n\t\t\t\treturn err\n\t\t\t})(ctx)\n\t}, \"build\", s.events)\n}\n\nfunc (s *composeService) build(ctx context.Context, project *types.Project, options api.BuildOptions, localImages map[string]api.ImageSummary) (map[string]string, error) {\n\timageIDs := map[string]string{}\n\tserviceToBuild := types.Services{}\n\n\tvar policy types.DependencyOption = types.IgnoreDependencies\n\tif options.Deps 
{\n\t\tpolicy = types.IncludeDependencies\n\t}\n\n\tif len(options.Services) == 0 {\n\t\toptions.Services = project.ServiceNames()\n\t}\n\n\t// also include services used as additional_contexts with service: prefix\n\toptions.Services = addBuildDependencies(options.Services, project)\n\n\t// Some build dependencies we just introduced may not be enabled\n\tvar err error\n\tproject, err = project.WithServicesEnabled(options.Services...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tproject, err = project.WithSelectedServices(options.Services)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = project.ForEachService(options.Services, func(serviceName string, service *types.ServiceConfig) error {\n\t\tif service.Build == nil {\n\t\t\treturn nil\n\t\t}\n\t\timage := api.GetImageNameOrDefault(*service, project.Name)\n\t\t_, localImagePresent := localImages[image]\n\t\tif localImagePresent && service.PullPolicy != types.PullPolicyBuild {\n\t\t\treturn nil\n\t\t}\n\t\tserviceToBuild[serviceName] = *service\n\t\treturn nil\n\t}, policy)\n\tif err != nil {\n\t\treturn imageIDs, err\n\t}\n\n\tif len(serviceToBuild) == 0 {\n\t\treturn imageIDs, nil\n\t}\n\n\tbake, err := buildWithBake(s.dockerCli)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif bake {\n\t\treturn s.doBuildBake(ctx, project, serviceToBuild, options)\n\t}\n\treturn s.doBuildClassic(ctx, project, serviceToBuild, options)\n}\n\nfunc (s *composeService) ensureImagesExists(ctx context.Context, project *types.Project, buildOpts *api.BuildOptions, quietPull bool) error {\n\tfor name, service := range project.Services {\n\t\tif service.Provider == nil && service.Image == \"\" && service.Build == nil {\n\t\t\treturn fmt.Errorf(\"invalid service %q. 
Must specify either image or build\", name)\n\t\t}\n\t}\n\n\timages, err := s.getLocalImagesDigests(ctx, project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = tracing.SpanWrapFunc(\"project/pull\", tracing.ProjectOptions(ctx, project),\n\t\tfunc(ctx context.Context) error {\n\t\t\treturn s.pullRequiredImages(ctx, project, images, quietPull)\n\t\t},\n\t)(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif buildOpts != nil {\n\t\terr = tracing.SpanWrapFunc(\"project/build\", tracing.ProjectOptions(ctx, project),\n\t\t\tfunc(ctx context.Context) error {\n\t\t\t\tbuiltImages, err := s.build(ctx, project, *buildOpts, images)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\n\t\t\t\tfor name, digest := range builtImages {\n\t\t\t\t\timages[name] = api.ImageSummary{\n\t\t\t\t\t\tRepository:  name,\n\t\t\t\t\t\tID:          digest,\n\t\t\t\t\t\tLastTagTime: time.Now(),\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\t},\n\t\t)(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// set digest as com.docker.compose.image label so we can detect outdated containers\n\tfor name, service := range project.Services {\n\t\timage := api.GetImageNameOrDefault(service, project.Name)\n\t\timg, ok := images[image]\n\t\tif ok {\n\t\t\tservice.CustomLabels.Add(api.ImageDigestLabel, img.ID)\n\t\t}\n\n\t\tresolveImageVolumes(&service, images, project.Name)\n\n\t\tproject.Services[name] = service\n\t}\n\treturn nil\n}\n\nfunc resolveImageVolumes(service *types.ServiceConfig, images map[string]api.ImageSummary, projectName string) {\n\tfor i, vol := range service.Volumes {\n\t\tif vol.Type == types.VolumeTypeImage {\n\t\t\timgName := vol.Source\n\t\t\tif _, ok := images[vol.Source]; !ok {\n\t\t\t\t// check if source is another service in the project\n\t\t\t\timgName = api.GetImageNameOrDefault(types.ServiceConfig{Name: vol.Source}, projectName)\n\t\t\t\t// If we still can't find it, it might be an external image that wasn't pulled yet or doesn't exist\n\t\t\t\tif 
_, ok := images[imgName]; !ok {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t}\n\t\t\tif img, ok := images[imgName]; ok {\n\t\t\t\t// Use Image ID directly as source.\n\t\t\t\t// Using name@digest format (via reference.WithDigest) fails for local-only images\n\t\t\t\t// that don't have RepoDigests (e.g. built locally in CI).\n\t\t\t\t// Image ID (sha256:...) is always valid and ensures ServiceHash changes on rebuild.\n\t\t\t\tservice.Volumes[i].Source = img.ID\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (s *composeService) getLocalImagesDigests(ctx context.Context, project *types.Project) (map[string]api.ImageSummary, error) {\n\timageNames := utils.Set[string]{}\n\tfor _, s := range project.Services {\n\t\timageNames.Add(api.GetImageNameOrDefault(s, project.Name))\n\t\tfor _, volume := range s.Volumes {\n\t\t\tif volume.Type == types.VolumeTypeImage {\n\t\t\t\timageNames.Add(volume.Source)\n\t\t\t}\n\t\t}\n\t}\n\timgs, err := s.getImageSummaries(ctx, imageNames.Elements())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor i, service := range project.Services {\n\t\timgName := api.GetImageNameOrDefault(service, project.Name)\n\t\timg, ok := imgs[imgName]\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\t\tif service.Platform != \"\" {\n\t\t\tplatform, err := platforms.Parse(service.Platform)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tinspect, err := s.apiClient().ImageInspect(ctx, img.ID)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tactual := specs.Platform{\n\t\t\t\tArchitecture: inspect.Architecture,\n\t\t\t\tOS:           inspect.Os,\n\t\t\t\tVariant:      inspect.Variant,\n\t\t\t}\n\t\t\tif !platforms.NewMatcher(platform).Match(actual) {\n\t\t\t\tlogrus.Debugf(\"local image %s doesn't match expected platform %s\", service.Image, service.Platform)\n\t\t\t\t// there is a local image, but it's for the wrong platform, so\n\t\t\t\t// pretend it doesn't exist so that we can pull/build an image\n\t\t\t\t// for the correct platform 
instead\n\t\t\t\tdelete(imgs, imgName)\n\t\t\t}\n\t\t}\n\n\t\tproject.Services[i].CustomLabels.Add(api.ImageDigestLabel, img.ID)\n\n\t}\n\n\treturn imgs, nil\n}\n\n// resolveAndMergeBuildArgs returns the final set of build arguments to use for the service image build.\n//\n// First, args directly defined via `build.args` in YAML are considered.\n// Then, any explicitly passed args in opts (e.g. via `--build-arg` on the CLI) are merged, overwriting any\n// keys that already exist.\n// Next, any keys without a value are resolved using the project environment.\n//\n// Finally, standard proxy variables based on the Docker client configuration are added, but will not overwrite\n// any values if already present.\nfunc resolveAndMergeBuildArgs(proxyConfig map[string]string, project *types.Project, service types.ServiceConfig, opts api.BuildOptions) types.MappingWithEquals {\n\tresult := make(types.MappingWithEquals).\n\t\tOverrideBy(service.Build.Args).\n\t\tOverrideBy(opts.Args).\n\t\tResolve(envResolver(project.Environment))\n\n\t// proxy arguments do NOT override and should NOT have env resolution applied,\n\t// so they're handled last\n\tfor k, v := range proxyConfig {\n\t\tif _, ok := result[k]; !ok {\n\t\t\tv := v\n\t\t\tresult[k] = &v\n\t\t}\n\t}\n\treturn result\n}\n\nfunc getImageBuildLabels(project *types.Project, service types.ServiceConfig) types.Labels {\n\tret := make(types.Labels)\n\tif service.Build != nil {\n\t\tfor k, v := range service.Build.Labels {\n\t\t\tret.Add(k, v)\n\t\t}\n\t}\n\n\tret.Add(api.VersionLabel, api.ComposeVersion)\n\tret.Add(api.ProjectLabel, project.Name)\n\tret.Add(api.ServiceLabel, service.Name)\n\treturn ret\n}\n\nfunc addBuildDependencies(services []string, project *types.Project) []string {\n\tservicesWithDependencies := utils.NewSet(services...)\n\tfor _, service := range services {\n\t\ts, ok := project.Services[service]\n\t\tif !ok {\n\t\t\ts = project.DisabledServices[service]\n\t\t}\n\t\tb := s.Build\n\t\tif b != nil 
{\n\t\t\tfor _, target := range b.AdditionalContexts {\n\t\t\t\tif s, found := strings.CutPrefix(target, types.ServicePrefix); found {\n\t\t\t\t\tservicesWithDependencies.Add(s)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tif len(servicesWithDependencies) > len(services) {\n\t\treturn addBuildDependencies(servicesWithDependencies.Elements(), project)\n\t}\n\treturn servicesWithDependencies.Elements()\n}\n"
  },
  {
    "path": "pkg/compose/build_bake.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/sha1\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/console\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/docker/cli/cli-plugins/manager\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/command/image/build\"\n\t\"github.com/docker/cli/cli/streams\"\n\t\"github.com/google/uuid\"\n\t\"github.com/moby/buildkit/client\"\n\tgitutil \"github.com/moby/buildkit/frontend/dockerfile/dfgitutil\"\n\t\"github.com/moby/buildkit/util/progress/progressui\"\n\t\"github.com/moby/moby/client/pkg/versions\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc buildWithBake(dockerCli command.Cli) (bool, error) {\n\tenabled, err := dockerCli.BuildKitEnabled()\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tif !enabled {\n\t\treturn false, nil\n\t}\n\n\t_, err = manager.GetPlugin(\"buildx\", dockerCli, &cobra.Command{})\n\tif err != nil {\n\t\tif errdefs.IsNotFound(err) {\n\t\t\tlogrus.Warnf(\"Docker Compose requires buildx plugin to be installed\")\n\t\t\treturn false, 
nil\n\t\t}\n\t\treturn false, err\n\t}\n\treturn true, err\n}\n\n// We _could_ use bake.* types from github.com/docker/buildx, but the long-term plan is to remove buildx as a dependency\ntype bakeConfig struct {\n\tGroups  map[string]bakeGroup  `json:\"group\"`\n\tTargets map[string]bakeTarget `json:\"target\"`\n}\n\ntype bakeGroup struct {\n\tTargets []string `json:\"targets\"`\n}\n\ntype bakeTarget struct {\n\tContext          string            `json:\"context,omitempty\"`\n\tContexts         map[string]string `json:\"contexts,omitempty\"`\n\tDockerfile       string            `json:\"dockerfile,omitempty\"`\n\tDockerfileInline string            `json:\"dockerfile-inline,omitempty\"`\n\tArgs             map[string]string `json:\"args,omitempty\"`\n\tLabels           map[string]string `json:\"labels,omitempty\"`\n\tTags             []string          `json:\"tags,omitempty\"`\n\tCacheFrom        []string          `json:\"cache-from,omitempty\"`\n\tCacheTo          []string          `json:\"cache-to,omitempty\"`\n\tTarget           string            `json:\"target,omitempty\"`\n\tSecrets          []string          `json:\"secret,omitempty\"`\n\tSSH              []string          `json:\"ssh,omitempty\"`\n\tPlatforms        []string          `json:\"platforms,omitempty\"`\n\tPull             bool              `json:\"pull,omitempty\"`\n\tNoCache          bool              `json:\"no-cache,omitempty\"`\n\tNetworkMode      string            `json:\"network,omitempty\"`\n\tNoCacheFilter    []string          `json:\"no-cache-filter,omitempty\"`\n\tShmSize          types.UnitBytes   `json:\"shm-size,omitempty\"`\n\tUlimits          []string          `json:\"ulimits,omitempty\"`\n\tCall             string            `json:\"call,omitempty\"`\n\tEntitlements     []string          `json:\"entitlements,omitempty\"`\n\tExtraHosts       map[string]string `json:\"extra-hosts,omitempty\"`\n\tOutputs          []string          `json:\"output,omitempty\"`\n\tAttest           []string    
      `json:\"attest,omitempty\"`\n}\n\ntype bakeMetadata map[string]buildStatus\n\ntype buildStatus struct {\n\tDigest string `json:\"containerimage.digest\"`\n\tImage  string `json:\"image.name\"`\n}\n\nfunc (s *composeService) doBuildBake(ctx context.Context, project *types.Project, serviceToBeBuild types.Services, options api.BuildOptions) (map[string]string, error) { //nolint:gocyclo\n\teg := errgroup.Group{}\n\tch := make(chan *client.SolveStatus)\n\tdisplayMode := progressui.DisplayMode(options.Progress)\n\tif p, ok := os.LookupEnv(\"BUILDKIT_PROGRESS\"); ok && displayMode == progressui.AutoMode {\n\t\tdisplayMode = progressui.DisplayMode(p)\n\t}\n\tout := options.Out\n\tif out == nil {\n\t\tout = s.stdout()\n\t}\n\tdisplay, err := progressui.NewDisplay(makeConsole(out), displayMode)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\teg.Go(func() error {\n\t\t_, err := display.UpdateFrom(ctx, ch)\n\t\treturn err\n\t})\n\n\tcfg := bakeConfig{\n\t\tGroups:  map[string]bakeGroup{},\n\t\tTargets: map[string]bakeTarget{},\n\t}\n\tvar (\n\t\tgroup          bakeGroup\n\t\tprivileged     bool\n\t\tread           []string\n\t\texpectedImages = make(map[string]string, len(serviceToBeBuild)) // service name -> expected image\n\t\ttargets        = make(map[string]string, len(serviceToBeBuild)) // service name -> build target\n\t)\n\n\t// produce a unique bake target name per service; dedupe on the target name, since distinct\n\t// service names (e.g. \"a.b\" and \"a_b\") can collide once dots are replaced\n\tusedTargets := map[string]bool{}\n\tfor serviceName := range project.Services {\n\t\tt := strings.ReplaceAll(serviceName, \".\", \"_\")\n\t\tfor usedTargets[t] {\n\t\t\tt += \"_\"\n\t\t}\n\t\tusedTargets[t] = true\n\t\ttargets[serviceName] = t\n\t}\n\n\tvar secretsEnv []string\n\tfor serviceName, service := range project.Services {\n\t\tif service.Build == nil {\n\t\t\tcontinue\n\t\t}\n\t\tbuildConfig := *service.Build\n\t\tlabels := getImageBuildLabels(project, service)\n\n\t\targs := resolveAndMergeBuildArgs(s.getProxyConfig(), project, service, options).ToMapping()\n\t\tfor k, v := 
range args {\n\t\t\targs[k] = strings.ReplaceAll(v, \"${\", \"$${\")\n\t\t}\n\n\t\tentitlements := buildConfig.Entitlements\n\t\tif slices.Contains(buildConfig.Entitlements, \"security.insecure\") {\n\t\t\tprivileged = true\n\t\t}\n\t\tif buildConfig.Privileged {\n\t\t\tentitlements = append(entitlements, \"security.insecure\")\n\t\t\tprivileged = true\n\t\t}\n\n\t\tvar outputs []string\n\t\tvar call string\n\t\tpush := options.Push && service.Image != \"\"\n\t\tswitch {\n\t\tcase options.Check:\n\t\t\tcall = \"lint\"\n\t\tcase len(service.Build.Platforms) > 1:\n\t\t\toutputs = []string{fmt.Sprintf(\"type=image,push=%t\", push)}\n\t\tdefault:\n\t\t\tif push {\n\t\t\t\toutputs = []string{\"type=registry\"}\n\t\t\t} else {\n\t\t\t\toutputs = []string{\"type=docker\"}\n\t\t\t}\n\t\t}\n\n\t\tread = append(read, buildConfig.Context)\n\t\tfor _, path := range buildConfig.AdditionalContexts {\n\t\t\t_, _, err := gitutil.ParseGitRef(path)\n\t\t\tif !strings.Contains(path, \"://\") && err != nil {\n\t\t\t\tread = append(read, path)\n\t\t\t}\n\t\t}\n\n\t\timage := api.GetImageNameOrDefault(service, project.Name)\n\t\ts.events.On(buildingEvent(image))\n\n\t\texpectedImages[serviceName] = image\n\n\t\tpull := service.Build.Pull || options.Pull\n\t\tnoCache := service.Build.NoCache || options.NoCache\n\n\t\ttarget := targets[serviceName]\n\n\t\tsecrets, env := toBakeSecrets(project, buildConfig.Secrets)\n\t\tsecretsEnv = append(secretsEnv, env...)\n\n\t\tcfg.Targets[target] = bakeTarget{\n\t\t\tContext:          buildConfig.Context,\n\t\t\tContexts:         additionalContexts(buildConfig.AdditionalContexts, targets),\n\t\t\tDockerfile:       dockerFilePath(buildConfig.Context, buildConfig.Dockerfile),\n\t\t\tDockerfileInline: strings.ReplaceAll(buildConfig.DockerfileInline, \"${\", \"$${\"),\n\t\t\tArgs:             args,\n\t\t\tLabels:           labels,\n\t\t\tTags:             append(buildConfig.Tags, image),\n\n\t\t\tCacheFrom:     buildConfig.CacheFrom,\n\t\t\tCacheTo:      
 buildConfig.CacheTo,\n\t\t\tNetworkMode:   buildConfig.Network,\n\t\t\tNoCacheFilter: buildConfig.NoCacheFilter,\n\t\t\tPlatforms:     buildConfig.Platforms,\n\t\t\tTarget:        buildConfig.Target,\n\t\t\tSecrets:       secrets,\n\t\t\tSSH:           toBakeSSH(append(buildConfig.SSH, options.SSHs...)),\n\t\t\tPull:          pull,\n\t\t\tNoCache:       noCache,\n\t\t\tShmSize:       buildConfig.ShmSize,\n\t\t\tUlimits:       toBakeUlimits(buildConfig.Ulimits),\n\t\t\tEntitlements:  entitlements,\n\t\t\tExtraHosts:    toBakeExtraHosts(buildConfig.ExtraHosts),\n\n\t\t\tOutputs: outputs,\n\t\t\tCall:    call,\n\t\t\tAttest:  toBakeAttest(buildConfig),\n\t\t}\n\t}\n\n\t// create a bake group with targets for services to build\n\tfor serviceName, service := range serviceToBeBuild {\n\t\tif service.Build == nil {\n\t\t\tcontinue\n\t\t}\n\t\tgroup.Targets = append(group.Targets, targets[serviceName])\n\t}\n\n\tcfg.Groups[\"default\"] = group\n\n\tb, err := json.MarshalIndent(cfg, \"\", \"  \")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif options.Print {\n\t\t_, err = fmt.Fprintln(s.stdout(), string(b))\n\t\treturn nil, err\n\t}\n\tlogrus.Debugf(\"bake build config:\\n%s\", string(b))\n\n\ttmpdir := os.TempDir()\n\tvar metadataFile string\n\tfor {\n\t\t// we don't use os.CreateTemp here as we need a temporary file name, but don't want it actually created\n\t\t// as bake relies on atomicwriter and this creates conflict during rename\n\t\tmetadataFile = filepath.Join(tmpdir, fmt.Sprintf(\"compose-build-metadataFile-%s.json\", uuid.New().String()))\n\t\tif _, err = os.Stat(metadataFile); err != nil {\n\t\t\tif os.IsNotExist(err) {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tvar pathError *fs.PathError\n\t\t\tif errors.As(err, &pathError) {\n\t\t\t\treturn nil, fmt.Errorf(\"can't access os.tempDir %s: %w\", tmpdir, pathError.Err)\n\t\t\t}\n\t\t}\n\t}\n\tdefer func() {\n\t\t_ = os.Remove(metadataFile)\n\t}()\n\n\tbuildx, err := s.getBuildxPlugin()\n\tif err != nil {\n\t\treturn 
nil, err\n\t}\n\n\targs := []string{\"bake\", \"--file\", \"-\", \"--progress\", \"rawjson\", \"--metadata-file\", metadataFile}\n\t// FIXME we should prompt user about this, but this is a breaking change in UX\n\tfor _, path := range read {\n\t\targs = append(args, \"--allow\", \"fs.read=\"+path)\n\t}\n\tif privileged {\n\t\targs = append(args, \"--allow\", \"security.insecure\")\n\t}\n\tif options.SBOM != \"\" {\n\t\targs = append(args, \"--sbom=\"+options.SBOM)\n\t}\n\tif options.Provenance != \"\" {\n\t\targs = append(args, \"--provenance=\"+options.Provenance)\n\t}\n\n\tif options.Builder != \"\" {\n\t\targs = append(args, \"--builder\", options.Builder)\n\t}\n\tif options.Quiet {\n\t\targs = append(args, \"--progress=quiet\")\n\t}\n\n\tlogrus.Debugf(\"Executing bake with args: %v\", args)\n\n\tif s.dryRun {\n\t\treturn s.dryRunBake(cfg), nil\n\t}\n\tcmd := exec.CommandContext(ctx, buildx.Path, args...)\n\n\terr = s.prepareShellOut(ctx, types.NewMapping(os.Environ()), cmd)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tendpoint, cleanup, err := s.propagateDockerEndpoint()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcmd.Env = append(cmd.Env, endpoint...)\n\tcmd.Env = append(cmd.Env, secretsEnv...)\n\tdefer cleanup()\n\n\tcmd.Stdout = s.stdout()\n\tcmd.Stdin = bytes.NewBuffer(b)\n\tpipe, err := cmd.StderrPipe()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar errMessage []string\n\treader := bufio.NewReader(pipe)\n\n\terr = cmd.Start()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\teg.Go(cmd.Wait)\n\tfor {\n\t\tline, readErr := reader.ReadString('\\n')\n\t\tif readErr != nil {\n\t\t\tif readErr == io.EOF {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tif errors.Is(readErr, os.ErrClosed) {\n\t\t\t\tlogrus.Debugf(\"bake stopped\")\n\t\t\t\tbreak\n\t\t\t}\n\t\t\treturn nil, fmt.Errorf(\"failed to execute bake: %w\", readErr)\n\t\t}\n\t\tdecoder := json.NewDecoder(strings.NewReader(line))\n\t\tvar status client.SolveStatus\n\t\terr := decoder.Decode(&status)\n\t\tif err 
!= nil {\n\t\t\tif strings.HasPrefix(line, \"ERROR: \") {\n\t\t\t\terrMessage = append(errMessage, line[7:])\n\t\t\t} else {\n\t\t\t\terrMessage = append(errMessage, line)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tch <- &status\n\t}\n\tclose(ch) // stop build progress UI\n\n\terr = eg.Wait()\n\tif err != nil {\n\t\tif len(errMessage) > 0 {\n\t\t\treturn nil, errors.New(strings.Join(errMessage, \"\\n\"))\n\t\t}\n\t\treturn nil, fmt.Errorf(\"failed to execute bake: %w\", err)\n\t}\n\n\tb, err = os.ReadFile(metadataFile)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar md bakeMetadata\n\terr = json.Unmarshal(b, &md)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tresults := map[string]string{}\n\tfor name := range serviceToBeBuild {\n\t\timage := expectedImages[name]\n\t\ttarget := targets[name]\n\t\tbuilt, ok := md[target]\n\t\tif !ok {\n\t\t\treturn nil, fmt.Errorf(\"build result not found in Bake metadata for service %s\", name)\n\t\t}\n\t\tresults[image] = built.Digest\n\t\ts.events.On(builtEvent(image))\n\t}\n\treturn results, nil\n}\n\nfunc (s *composeService) getBuildxPlugin() (*manager.Plugin, error) {\n\tbuildx, err := manager.GetPlugin(\"buildx\", s.dockerCli, &cobra.Command{})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif buildx.Err != nil {\n\t\treturn nil, buildx.Err\n\t}\n\n\tif buildx.Version == \"\" {\n\t\treturn nil, fmt.Errorf(\"failed to get version of buildx\")\n\t}\n\n\tif versions.LessThan(buildx.Version[1:], buildxMinVersion) {\n\t\treturn nil, fmt.Errorf(\"compose build requires buildx %s or later\", buildxMinVersion)\n\t}\n\n\treturn buildx, nil\n}\n\n// makeConsole wraps the provided writer to match the [console.File] interface if it is of type *streams.Out.\n// buildkit's NewDisplay doesn't actually require an [io.Reader]: it only uses the [console.Console] type to\n// benefit from ANSI capabilities, and only performs writes.\nfunc makeConsole(out io.Writer) io.Writer {\n\tif s, ok := out.(*streams.Out); ok {\n\t\treturn 
&_console{s}\n\t}\n\treturn out\n}\n\nvar _ console.File = &_console{}\n\ntype _console struct {\n\t*streams.Out\n}\n\nfunc (c _console) Read([]byte) (n int, err error) {\n\treturn 0, errors.New(\"not implemented\")\n}\n\nfunc (c _console) Close() error {\n\treturn nil\n}\n\nfunc (c _console) Fd() uintptr {\n\treturn c.FD()\n}\n\nfunc (c _console) Name() string {\n\treturn \"compose\"\n}\n\nfunc toBakeExtraHosts(hosts types.HostsList) map[string]string {\n\tm := make(map[string]string)\n\tfor k, v := range hosts {\n\t\tm[k] = strings.Join(v, \",\")\n\t}\n\treturn m\n}\n\nfunc additionalContexts(contexts types.Mapping, targets map[string]string) map[string]string {\n\tac := map[string]string{}\n\tfor k, v := range contexts {\n\t\tif target, found := strings.CutPrefix(v, types.ServicePrefix); found {\n\t\t\tv = \"target:\" + targets[target]\n\t\t}\n\t\tac[k] = v\n\t}\n\treturn ac\n}\n\nfunc toBakeUlimits(ulimits map[string]*types.UlimitsConfig) []string {\n\ts := []string{}\n\tfor u, l := range ulimits {\n\t\tif l.Single > 0 {\n\t\t\ts = append(s, fmt.Sprintf(\"%s=%d\", u, l.Single))\n\t\t} else {\n\t\t\ts = append(s, fmt.Sprintf(\"%s=%d:%d\", u, l.Soft, l.Hard))\n\t\t}\n\t}\n\treturn s\n}\n\nfunc toBakeSSH(ssh types.SSHConfig) []string {\n\tvar s []string\n\tfor _, key := range ssh {\n\t\ts = append(s, fmt.Sprintf(\"%s=%s\", key.ID, key.Path))\n\t}\n\treturn s\n}\n\nfunc toBakeSecrets(project *types.Project, secrets []types.ServiceSecretConfig) ([]string, []string) {\n\tvar s []string\n\tvar env []string\n\tfor _, ref := range secrets {\n\t\tdef := project.Secrets[ref.Source]\n\t\ttarget := ref.Target\n\t\tif target == \"\" {\n\t\t\ttarget = ref.Source\n\t\t}\n\t\tswitch {\n\t\tcase def.Environment != \"\":\n\t\t\tenv = append(env, fmt.Sprintf(\"%s=%s\", def.Environment, project.Environment[def.Environment]))\n\t\t\ts = append(s, fmt.Sprintf(\"id=%s,type=env,env=%s\", target, def.Environment))\n\t\tcase def.File != \"\":\n\t\t\ts = append(s, 
fmt.Sprintf(\"id=%s,type=file,src=%s\", target, def.File))\n\t\t}\n\t}\n\treturn s, env\n}\n\nfunc toBakeAttest(buildConfig types.BuildConfig) []string {\n\tvar attests []string\n\n\t// Handle per-service provenance configuration (only from build config, not global options)\n\tif buildConfig.Provenance != \"\" {\n\t\tif buildConfig.Provenance == \"true\" {\n\t\t\tattests = append(attests, \"type=provenance\")\n\t\t} else if buildConfig.Provenance != \"false\" {\n\t\t\tattests = append(attests, fmt.Sprintf(\"type=provenance,%s\", buildConfig.Provenance))\n\t\t}\n\t}\n\n\t// Handle per-service SBOM configuration (only from build config, not global options)\n\tif buildConfig.SBOM != \"\" {\n\t\tif buildConfig.SBOM == \"true\" {\n\t\t\tattests = append(attests, \"type=sbom\")\n\t\t} else if buildConfig.SBOM != \"false\" {\n\t\t\tattests = append(attests, fmt.Sprintf(\"type=sbom,%s\", buildConfig.SBOM))\n\t\t}\n\t}\n\n\treturn attests\n}\n\nfunc dockerFilePath(ctxName string, dockerfile string) string {\n\tif dockerfile == \"\" {\n\t\treturn \"\"\n\t}\n\tif contextType, _ := build.DetectContextType(ctxName); contextType == build.ContextTypeGit {\n\t\treturn dockerfile\n\t}\n\tif !filepath.IsAbs(dockerfile) {\n\t\tdockerfile = filepath.Join(ctxName, dockerfile)\n\t}\n\tdir := filepath.Dir(dockerfile)\n\tsymlinks, err := filepath.EvalSymlinks(dir)\n\tif err == nil {\n\t\treturn filepath.Join(symlinks, filepath.Base(dockerfile))\n\t}\n\treturn dockerfile\n}\n\nfunc (s composeService) dryRunBake(cfg bakeConfig) map[string]string {\n\tbakeResponse := map[string]string{}\n\tfor name, target := range cfg.Targets {\n\t\tdryRunUUID := fmt.Sprintf(\"dryRun-%x\", sha1.Sum([]byte(name)))\n\t\ts.displayDryRunBuildEvent(name, dryRunUUID, target.Tags[0])\n\t\tbakeResponse[name] = dryRunUUID\n\t}\n\tfor name := range bakeResponse {\n\t\ts.events.On(builtEvent(name))\n\t}\n\treturn bakeResponse\n}\n\nfunc (s composeService) displayDryRunBuildEvent(name, dryRunUUID, tag string) 
{\n\ts.events.On(api.Resource{\n\t\tID:     name + \" ==>\",\n\t\tStatus: api.Done,\n\t\tText:   fmt.Sprintf(\"==> writing image %s\", dryRunUUID),\n\t})\n\ts.events.On(api.Resource{\n\t\tID:     name + \" ==> ==>\",\n\t\tStatus: api.Done,\n\t\tText:   fmt.Sprintf(`naming to %s`, tag),\n\t})\n}\n"
  },
  {
    "path": "pkg/compose/build_classic.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command/image/build\"\n\t\"github.com/moby/go-archive\"\n\tbuildtypes \"github.com/moby/moby/api/types/build\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/jsonstream\"\n\t\"github.com/moby/moby/api/types/registry\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/moby/moby/client/pkg/jsonmessage\"\n\t\"github.com/moby/moby/client/pkg/progress\"\n\t\"github.com/moby/moby/client/pkg/streamformatter\"\n\t\"github.com/sirupsen/logrus\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/trace\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) doBuildClassic(ctx context.Context, project *types.Project, serviceToBuild types.Services, options api.BuildOptions) (map[string]string, error) {\n\timageIDs := map[string]string{}\n\n\t// Not using bake, additional_context: service:xx is implemented by building images in dependency order\n\tproject, err := project.WithServicesTransform(func(serviceName string, service types.ServiceConfig) (types.ServiceConfig, error) {\n\t\tif service.Build != nil 
{\n\t\t\tfor _, c := range service.Build.AdditionalContexts {\n\t\t\t\tif t, found := strings.CutPrefix(c, types.ServicePrefix); found {\n\t\t\t\t\tif service.DependsOn == nil {\n\t\t\t\t\t\tservice.DependsOn = map[string]types.ServiceDependency{}\n\t\t\t\t\t}\n\t\t\t\t\tservice.DependsOn[t] = types.ServiceDependency{\n\t\t\t\t\t\tCondition: \"build\", // non-canonical, but will force dependency graph ordering\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn service, nil\n\t})\n\tif err != nil {\n\t\treturn imageIDs, err\n\t}\n\n\t// we use a pre-allocated []string to collect build digest by service index while running concurrent goroutines\n\tbuiltDigests := make([]string, len(project.Services))\n\tnames := project.ServiceNames()\n\tgetServiceIndex := func(name string) int {\n\t\tfor idx, n := range names {\n\t\t\tif n == name {\n\t\t\t\treturn idx\n\t\t\t}\n\t\t}\n\t\treturn -1\n\t}\n\n\terr = InDependencyOrder(ctx, project, func(ctx context.Context, name string) error {\n\t\ttrace.SpanFromContext(ctx).SetAttributes(attribute.String(\"builder\", \"classic\"))\n\t\tservice, ok := serviceToBuild[name]\n\t\tif !ok {\n\t\t\treturn nil\n\t\t}\n\n\t\timage := api.GetImageNameOrDefault(service, project.Name)\n\t\ts.events.On(buildingEvent(image))\n\t\tid, err := s.doBuildImage(ctx, project, service, options)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ts.events.On(builtEvent(image))\n\t\tbuiltDigests[getServiceIndex(name)] = id\n\n\t\tif options.Push {\n\t\t\treturn s.push(ctx, project, api.PushOptions{})\n\t\t}\n\t\treturn nil\n\t}, func(traversal *graphTraversal) {\n\t\ttraversal.maxConcurrency = s.maxConcurrency\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor i, imageDigest := range builtDigests {\n\t\tif imageDigest != \"\" {\n\t\t\tservice := project.Services[names[i]]\n\t\t\timageRef := api.GetImageNameOrDefault(service, project.Name)\n\t\t\timageIDs[imageRef] = imageDigest\n\t\t}\n\t}\n\treturn imageIDs, err\n}\n\n//nolint:gocyclo\nfunc (s 
*composeService) doBuildImage(ctx context.Context, project *types.Project, service types.ServiceConfig, options api.BuildOptions) (string, error) {\n\tvar (\n\t\tbuildCtx      io.ReadCloser\n\t\tdockerfileCtx io.ReadCloser\n\t\tcontextDir    string\n\t\trelDockerfile string\n\t)\n\n\tif len(service.Build.Platforms) > 1 {\n\t\treturn \"\", fmt.Errorf(\"the classic builder doesn't support multi-arch build, set DOCKER_BUILDKIT=1 to use BuildKit\")\n\t}\n\tif service.Build.Privileged {\n\t\treturn \"\", fmt.Errorf(\"the classic builder doesn't support privileged mode, set DOCKER_BUILDKIT=1 to use BuildKit\")\n\t}\n\tif len(service.Build.AdditionalContexts) > 0 {\n\t\treturn \"\", fmt.Errorf(\"the classic builder doesn't support additional contexts, set DOCKER_BUILDKIT=1 to use BuildKit\")\n\t}\n\tif len(service.Build.SSH) > 0 {\n\t\treturn \"\", fmt.Errorf(\"the classic builder doesn't support SSH keys, set DOCKER_BUILDKIT=1 to use BuildKit\")\n\t}\n\tif len(service.Build.Secrets) > 0 {\n\t\treturn \"\", fmt.Errorf(\"the classic builder doesn't support secrets, set DOCKER_BUILDKIT=1 to use BuildKit\")\n\t}\n\n\tif service.Build.Labels == nil {\n\t\tservice.Build.Labels = make(map[string]string)\n\t}\n\tservice.Build.Labels[api.ImageBuilderLabel] = \"classic\"\n\n\tdockerfileName := dockerFilePath(service.Build.Context, service.Build.Dockerfile)\n\tspecifiedContext := service.Build.Context\n\tprogBuff := s.stdout()\n\tbuildBuff := s.stdout()\n\n\tcontextType, err := build.DetectContextType(specifiedContext)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tswitch contextType {\n\tcase build.ContextTypeStdin:\n\t\treturn \"\", fmt.Errorf(\"building from STDIN is not supported\")\n\tcase build.ContextTypeLocal:\n\t\tcontextDir, relDockerfile, err = build.GetContextFromLocalDir(specifiedContext, dockerfileName)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"unable to prepare context: %w\", err)\n\t\t}\n\t\tif strings.HasPrefix(relDockerfile, 
\"..\"+string(filepath.Separator)) {\n\t\t\t// Dockerfile is outside build-context; read the Dockerfile and pass it as dockerfileCtx\n\t\t\tdockerfileCtx, err = os.Open(dockerfileName)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", fmt.Errorf(\"unable to open Dockerfile: %w\", err)\n\t\t\t}\n\t\t\tdefer dockerfileCtx.Close() //nolint:errcheck\n\t\t}\n\tcase build.ContextTypeGit:\n\t\tvar tempDir string\n\t\ttempDir, relDockerfile, err = build.GetContextFromGitURL(specifiedContext, dockerfileName)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"unable to prepare context: %w\", err)\n\t\t}\n\t\tdefer func() {\n\t\t\t_ = os.RemoveAll(tempDir)\n\t\t}()\n\t\tcontextDir = tempDir\n\tcase build.ContextTypeRemote:\n\t\tbuildCtx, relDockerfile, err = build.GetContextFromURL(progBuff, specifiedContext, dockerfileName)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"unable to prepare context: %w\", err)\n\t\t}\n\tdefault:\n\t\treturn \"\", fmt.Errorf(\"unable to prepare context: path %q not found\", specifiedContext)\n\t}\n\n\t// read from a directory into tar archive\n\tif buildCtx == nil {\n\t\texcludes, err := build.ReadDockerignore(contextDir)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\tif err := build.ValidateContextDirectory(contextDir, excludes); err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"checking context: %w\", err)\n\t\t}\n\n\t\t// And canonicalize dockerfile name to a platform-independent one\n\t\trelDockerfile = filepath.ToSlash(relDockerfile)\n\n\t\texcludes = build.TrimBuildFilesFromExcludes(excludes, relDockerfile, false)\n\t\tbuildCtx, err = archive.TarWithOptions(contextDir, &archive.TarOptions{\n\t\t\tExcludePatterns: excludes,\n\t\t\tChownOpts:       &archive.ChownOpts{UID: 0, GID: 0},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\n\t// replace Dockerfile if it was added from stdin or a file outside the build-context, and there is archive context\n\tif dockerfileCtx != nil && buildCtx != nil {\n\t\tbuildCtx, 
relDockerfile, err = build.AddDockerfileToBuildContext(dockerfileCtx, buildCtx)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\n\tbuildCtx, err = build.Compress(buildCtx)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\t// Setup an upload progress bar\n\tprogressOutput := streamformatter.NewProgressOutput(progBuff)\n\tbody := progress.NewProgressReader(buildCtx, progressOutput, 0, \"\", \"Sending build context to Docker daemon\")\n\n\tconfigFile := s.configFile()\n\tcreds, err := configFile.GetAllCredentials()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tauthConfigs := make(map[string]registry.AuthConfig, len(creds))\n\tfor k, authConfig := range creds {\n\t\tauthConfigs[k] = registry.AuthConfig{\n\t\t\tUsername:      authConfig.Username,\n\t\t\tPassword:      authConfig.Password,\n\t\t\tServerAddress: authConfig.ServerAddress,\n\n\t\t\t// TODO(thaJeztah): Are these expected to be included? See https://github.com/docker/cli/pull/6516#discussion_r2387586472\n\t\t\tAuth:          authConfig.Auth,\n\t\t\tIdentityToken: authConfig.IdentityToken,\n\t\t\tRegistryToken: authConfig.RegistryToken,\n\t\t}\n\t}\n\tbuildOpts := imageBuildOptions(s.getProxyConfig(), project, service, options)\n\timageName := api.GetImageNameOrDefault(service, project.Name)\n\tbuildOpts.Tags = append(buildOpts.Tags, imageName)\n\tbuildOpts.Dockerfile = relDockerfile\n\tbuildOpts.AuthConfigs = authConfigs\n\tbuildOpts.Memory = options.Memory\n\n\tctx, cancel := context.WithCancel(ctx)\n\tdefer cancel()\n\ts.events.On(buildingEvent(imageName))\n\tresponse, err := s.apiClient().ImageBuild(ctx, body, buildOpts)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer response.Body.Close() //nolint:errcheck\n\n\timageID := \"\"\n\taux := func(msg jsonstream.Message) {\n\t\tvar result buildtypes.Result\n\t\tif err := json.Unmarshal(*msg.Aux, &result); err != nil {\n\t\t\tlogrus.Errorf(\"Failed to parse aux message: %s\", err)\n\t\t} else {\n\t\t\timageID = 
result.ID\n\t\t}\n\t}\n\n\terr = jsonmessage.DisplayJSONMessagesStream(response.Body, buildBuff, progBuff.FD(), true, aux)\n\tif err != nil {\n\t\tvar jerr *jsonstream.Error\n\t\tif errors.As(err, &jerr) {\n\t\t\t// If no error code is set, default to 1\n\t\t\tif jerr.Code == 0 {\n\t\t\t\tjerr.Code = 1\n\t\t\t}\n\t\t\treturn \"\", cli.StatusError{Status: jerr.Message, StatusCode: jerr.Code}\n\t\t}\n\t\treturn \"\", err\n\t}\n\ts.events.On(builtEvent(imageName))\n\treturn imageID, nil\n}\n\nfunc imageBuildOptions(proxyConfigs map[string]string, project *types.Project, service types.ServiceConfig, options api.BuildOptions) client.ImageBuildOptions {\n\tconfig := service.Build\n\treturn client.ImageBuildOptions{\n\t\tVersion:     buildtypes.BuilderV1,\n\t\tTags:        config.Tags,\n\t\tNoCache:     config.NoCache,\n\t\tRemove:      true,\n\t\tPullParent:  config.Pull,\n\t\tBuildArgs:   resolveAndMergeBuildArgs(proxyConfigs, project, service, options),\n\t\tLabels:      config.Labels,\n\t\tNetworkMode: config.Network,\n\t\tExtraHosts:  config.ExtraHosts.AsList(\":\"),\n\t\tTarget:      config.Target,\n\t\tIsolation:   container.Isolation(config.Isolation),\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/build_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"slices\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc Test_addBuildDependencies(t *testing.T) {\n\tproject := &types.Project{Services: types.Services{\n\t\t\"test\": types.ServiceConfig{\n\t\t\tBuild: &types.BuildConfig{\n\t\t\t\tAdditionalContexts: map[string]string{\n\t\t\t\t\t\"foo\": \"service:foo\",\n\t\t\t\t\t\"bar\": \"service:bar\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t\"foo\": types.ServiceConfig{\n\t\t\tBuild: &types.BuildConfig{\n\t\t\t\tAdditionalContexts: map[string]string{\n\t\t\t\t\t\"zot\": \"service:zot\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t\"bar\": types.ServiceConfig{\n\t\t\tBuild: &types.BuildConfig{},\n\t\t},\n\t\t\"zot\": types.ServiceConfig{\n\t\t\tBuild: &types.BuildConfig{},\n\t\t},\n\t}}\n\n\tservices := addBuildDependencies([]string{\"test\"}, project)\n\texpected := []string{\"test\", \"foo\", \"bar\", \"zot\"}\n\tslices.Sort(services)\n\tslices.Sort(expected)\n\tassert.DeepEqual(t, services, expected)\n}\n"
  },
  {
    "path": "pkg/compose/commit.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Commit(ctx context.Context, projectName string, options api.CommitOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.commit(ctx, projectName, options)\n\t}, \"commit\", s.events)\n}\n\nfunc (s *composeService) commit(ctx context.Context, projectName string, options api.CommitOptions) error {\n\tprojectName = strings.ToLower(projectName)\n\n\tctr, err := s.getSpecifiedContainer(ctx, projectName, oneOffInclude, false, options.Service, options.Index)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tname := getCanonicalContainerName(ctr)\n\n\ts.events.On(api.Resource{\n\t\tID:     name,\n\t\tStatus: api.Working,\n\t\tText:   api.StatusCommitting,\n\t})\n\n\tif s.dryRun {\n\t\ts.events.On(api.Resource{\n\t\t\tID:     name,\n\t\t\tStatus: api.Done,\n\t\t\tText:   api.StatusCommitted,\n\t\t})\n\n\t\treturn nil\n\t}\n\n\tresponse, err := s.apiClient().ContainerCommit(ctx, ctr.ID, client.ContainerCommitOptions{\n\t\tReference: options.Reference,\n\t\tComment:   options.Comment,\n\t\tAuthor:    options.Author,\n\t\tChanges:   options.Changes.GetSlice(),\n\t\tNoPause:   !options.Pause,\n\t})\n\tif err != nil {\n\t\treturn 
err\n\t}\n\n\ts.events.On(api.Resource{\n\t\tID:     name,\n\t\tText:   fmt.Sprintf(\"Committed as %s\", response.ID),\n\t\tStatus: api.Done,\n\t})\n\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/compose/compose.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/buildx/store/storeutil\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/config/configfile\"\n\t\"github.com/docker/cli/cli/flags\"\n\t\"github.com/docker/cli/cli/streams\"\n\t\"github.com/jonboulle/clockwork\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/swarm\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/dryrun\"\n)\n\ntype Option func(service *composeService) error\n\n// NewComposeService creates a Compose service using Docker CLI.\n// This is the standard constructor that requires command.Cli for full functionality.\n//\n// Example usage:\n//\n//\tdockerCli, _ := command.NewDockerCli()\n//\tservice := NewComposeService(dockerCli)\n//\n// For advanced configuration with custom overrides, use ServiceOption functions:\n//\n//\tservice := NewComposeService(dockerCli,\n//\t    WithPrompt(prompt.NewPrompt(cli.In(), cli.Out()).Confirm),\n//\t    WithOutputStream(customOut),\n//\t    WithErrorStream(customErr),\n//\t    WithInputStream(customIn))\n//\n// Or set all streams at 
once:\n//\n//\tservice := NewComposeService(dockerCli,\n//\t    WithStreams(customOut, customErr, customIn))\nfunc NewComposeService(dockerCli command.Cli, options ...Option) (api.Compose, error) {\n\ts := &composeService{\n\t\tdockerCli:      dockerCli,\n\t\tclock:          clockwork.NewRealClock(),\n\t\tmaxConcurrency: -1,\n\t\tdryRun:         false,\n\t}\n\tfor _, option := range options {\n\t\tif err := option(s); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\tif s.prompt == nil {\n\t\ts.prompt = func(message string, defaultValue bool) (bool, error) {\n\t\t\tfmt.Println(message)\n\t\t\tlogrus.Warning(\"Compose is running without a 'prompt' component to interact with user\")\n\t\t\treturn defaultValue, nil\n\t\t}\n\t}\n\tif s.events == nil {\n\t\ts.events = &ignore{}\n\t}\n\n\t// If custom streams were provided, wrap the Docker CLI to use them\n\tif s.outStream != nil || s.errStream != nil || s.inStream != nil {\n\t\ts.dockerCli = s.wrapDockerCliWithStreams(dockerCli)\n\t}\n\n\treturn s, nil\n}\n\n// WithStreams sets custom I/O streams for output and interaction\nfunc WithStreams(out, err io.Writer, in io.Reader) Option {\n\treturn func(s *composeService) error {\n\t\ts.outStream = out\n\t\ts.errStream = err\n\t\ts.inStream = in\n\t\treturn nil\n\t}\n}\n\n// WithOutputStream sets a custom output stream\nfunc WithOutputStream(out io.Writer) Option {\n\treturn func(s *composeService) error {\n\t\ts.outStream = out\n\t\treturn nil\n\t}\n}\n\n// WithErrorStream sets a custom error stream\nfunc WithErrorStream(err io.Writer) Option {\n\treturn func(s *composeService) error {\n\t\ts.errStream = err\n\t\treturn nil\n\t}\n}\n\n// WithInputStream sets a custom input stream\nfunc WithInputStream(in io.Reader) Option {\n\treturn func(s *composeService) error {\n\t\ts.inStream = in\n\t\treturn nil\n\t}\n}\n\n// WithContextInfo sets custom Docker context information\nfunc WithContextInfo(info api.ContextInfo) Option {\n\treturn func(s *composeService) error 
{\n\t\ts.contextInfo = info\n\t\treturn nil\n\t}\n}\n\n// WithProxyConfig sets custom HTTP proxy configuration for builds\nfunc WithProxyConfig(config map[string]string) Option {\n\treturn func(s *composeService) error {\n\t\ts.proxyConfig = config\n\t\treturn nil\n\t}\n}\n\n// WithPrompt configures a UI component for the Compose service to interact with the user and confirm actions\nfunc WithPrompt(prompt Prompt) Option {\n\treturn func(s *composeService) error {\n\t\ts.prompt = prompt\n\t\treturn nil\n\t}\n}\n\n// WithMaxConcurrency defines the upper limit for concurrent operations against the engine API\nfunc WithMaxConcurrency(maxConcurrency int) Option {\n\treturn func(s *composeService) error {\n\t\ts.maxConcurrency = maxConcurrency\n\t\treturn nil\n\t}\n}\n\n// WithDryRun configures Compose to run without actually applying changes\nfunc WithDryRun(s *composeService) error {\n\ts.dryRun = true\n\tcli, err := command.NewDockerCli()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\toptions := flags.NewClientOptions()\n\toptions.Context = s.dockerCli.CurrentContext()\n\terr = cli.Initialize(options, command.WithInitializeClient(func(cli *command.DockerCli) (client.APIClient, error) {\n\t\treturn dryrun.NewDryRunClient(s.apiClient(), s.dockerCli)\n\t}))\n\tif err != nil {\n\t\treturn err\n\t}\n\ts.dockerCli = cli\n\treturn nil\n}\n\ntype Prompt func(message string, defaultValue bool) (bool, error)\n\n// AlwaysOkPrompt returns a Prompt implementation that always returns true without user interaction.\nfunc AlwaysOkPrompt() Prompt {\n\treturn func(message string, defaultValue bool) (bool, error) {\n\t\treturn true, nil\n\t}\n}\n\n// WithEventProcessor configures a component to get notified of Compose operation and progress events.\n// Typically used to configure a progress UI\nfunc WithEventProcessor(bus api.EventProcessor) Option {\n\treturn func(s *composeService) error {\n\t\ts.events = bus\n\t\treturn nil\n\t}\n}\n\ntype composeService struct {\n\tdockerCli command.Cli\n\t// prompt is 
used to interact with user and confirm actions\n\tprompt Prompt\n\t// eventBus collects tasks execution events\n\tevents api.EventProcessor\n\n\t// Optional overrides for specific components (for SDK users)\n\toutStream   io.Writer\n\terrStream   io.Writer\n\tinStream    io.Reader\n\tcontextInfo api.ContextInfo\n\tproxyConfig map[string]string\n\n\tclock          clockwork.Clock\n\tmaxConcurrency int\n\tdryRun         bool\n}\n\n// Close releases any connections/resources held by the underlying clients.\n//\n// In practice, this service has the same lifetime as the process, so everything\n// will get cleaned up at about the same time regardless even if not invoked.\nfunc (s *composeService) Close() error {\n\tvar errs []error\n\tif s.dockerCli != nil {\n\t\terrs = append(errs, s.apiClient().Close())\n\t}\n\treturn errors.Join(errs...)\n}\n\nfunc (s *composeService) apiClient() client.APIClient {\n\treturn s.dockerCli.Client()\n}\n\nfunc (s *composeService) configFile() *configfile.ConfigFile {\n\treturn s.dockerCli.ConfigFile()\n}\n\n// getContextInfo returns the context info - either custom override or dockerCli adapter\nfunc (s *composeService) getContextInfo() api.ContextInfo {\n\tif s.contextInfo != nil {\n\t\treturn s.contextInfo\n\t}\n\treturn &dockerCliContextInfo{cli: s.dockerCli}\n}\n\n// getProxyConfig returns the proxy config - either custom override or environment-based\nfunc (s *composeService) getProxyConfig() map[string]string {\n\tif s.proxyConfig != nil {\n\t\treturn s.proxyConfig\n\t}\n\treturn storeutil.GetProxyConfig(s.dockerCli)\n}\n\nfunc (s *composeService) stdout() *streams.Out {\n\treturn s.dockerCli.Out()\n}\n\nfunc (s *composeService) stdin() *streams.In {\n\treturn s.dockerCli.In()\n}\n\nfunc (s *composeService) stderr() *streams.Out {\n\treturn s.dockerCli.Err()\n}\n\n// readCloserAdapter adapts io.Reader to io.ReadCloser\ntype readCloserAdapter struct {\n\tr io.Reader\n}\n\nfunc (r *readCloserAdapter) Read(p []byte) (int, error) 
{\n\treturn r.r.Read(p)\n}\n\nfunc (r *readCloserAdapter) Close() error {\n\treturn nil\n}\n\n// wrapDockerCliWithStreams wraps the Docker CLI to intercept and override stream methods\nfunc (s *composeService) wrapDockerCliWithStreams(baseCli command.Cli) command.Cli {\n\twrapper := &streamOverrideWrapper{\n\t\tCli: baseCli,\n\t}\n\n\t// Wrap custom streams in Docker CLI's stream types\n\tif s.outStream != nil {\n\t\twrapper.outStream = streams.NewOut(s.outStream)\n\t}\n\tif s.errStream != nil {\n\t\twrapper.errStream = streams.NewOut(s.errStream)\n\t}\n\tif s.inStream != nil {\n\t\twrapper.inStream = streams.NewIn(&readCloserAdapter{r: s.inStream})\n\t}\n\n\treturn wrapper\n}\n\n// streamOverrideWrapper wraps command.Cli to override streams with custom implementations\ntype streamOverrideWrapper struct {\n\tcommand.Cli\n\toutStream *streams.Out\n\terrStream *streams.Out\n\tinStream  *streams.In\n}\n\nfunc (w *streamOverrideWrapper) Out() *streams.Out {\n\tif w.outStream != nil {\n\t\treturn w.outStream\n\t}\n\treturn w.Cli.Out()\n}\n\nfunc (w *streamOverrideWrapper) Err() *streams.Out {\n\tif w.errStream != nil {\n\t\treturn w.errStream\n\t}\n\treturn w.Cli.Err()\n}\n\nfunc (w *streamOverrideWrapper) In() *streams.In {\n\tif w.inStream != nil {\n\t\treturn w.inStream\n\t}\n\treturn w.Cli.In()\n}\n\nfunc getCanonicalContainerName(c container.Summary) string {\n\tif len(c.Names) == 0 {\n\t\t// corner case, sometimes happens on removal. 
return short ID as a safeguard value\n\t\treturn c.ID[:12]\n\t}\n\t// Names return container canonical name /foo  + link aliases /linked_by/foo\n\tfor _, name := range c.Names {\n\t\tif strings.LastIndex(name, \"/\") == 0 {\n\t\t\treturn name[1:]\n\t\t}\n\t}\n\n\treturn strings.TrimPrefix(c.Names[0], \"/\")\n}\n\nfunc getContainerNameWithoutProject(c container.Summary) string {\n\tproject := c.Labels[api.ProjectLabel]\n\tdefaultName := getDefaultContainerName(project, c.Labels[api.ServiceLabel], c.Labels[api.ContainerNumberLabel])\n\tname := getCanonicalContainerName(c)\n\tif name != defaultName {\n\t\t// service declares a custom container_name\n\t\treturn name\n\t}\n\treturn name[len(project)+1:]\n}\n\n// projectFromName builds a types.Project based on actual resources with compose labels set\nfunc (s *composeService) projectFromName(containers Containers, projectName string, services ...string) (*types.Project, error) {\n\tproject := &types.Project{\n\t\tName:     projectName,\n\t\tServices: types.Services{},\n\t}\n\tif len(containers) == 0 {\n\t\treturn project, fmt.Errorf(\"no container found for project %q: %w\", projectName, api.ErrNotFound)\n\t}\n\tset := types.Services{}\n\tfor _, c := range containers {\n\t\tserviceLabel, ok := c.Labels[api.ServiceLabel]\n\t\tif !ok {\n\t\t\tserviceLabel = getCanonicalContainerName(c)\n\t\t}\n\t\tservice, ok := set[serviceLabel]\n\t\tif !ok {\n\t\t\tservice = types.ServiceConfig{\n\t\t\t\tName:   serviceLabel,\n\t\t\t\tImage:  c.Image,\n\t\t\t\tLabels: c.Labels,\n\t\t\t}\n\t\t}\n\t\tservice.Scale = increment(service.Scale)\n\t\tset[serviceLabel] = service\n\t}\n\tfor name, service := range set {\n\t\tdependencies := service.Labels[api.DependenciesLabel]\n\t\tif dependencies != \"\" {\n\t\t\tservice.DependsOn = types.DependsOnConfig{}\n\t\t\tfor dc := range strings.SplitSeq(dependencies, \",\") {\n\t\t\t\tdcArr := strings.Split(dc, \":\")\n\t\t\t\tcondition := ServiceConditionRunningOrHealthy\n\t\t\t\t// Let's restart the 
dependency by default if we don't have the info stored in the label\n\t\t\t\trestart := true\n\t\t\t\trequired := true\n\t\t\t\tdependency := dcArr[0]\n\n\t\t\t\t// backward compatibility\n\t\t\t\tif len(dcArr) > 1 {\n\t\t\t\t\tcondition = dcArr[1]\n\t\t\t\t\tif len(dcArr) > 2 {\n\t\t\t\t\t\trestart, _ = strconv.ParseBool(dcArr[2])\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tservice.DependsOn[dependency] = types.ServiceDependency{Condition: condition, Restart: restart, Required: required}\n\t\t\t}\n\t\t\tset[name] = service\n\t\t}\n\t}\n\tproject.Services = set\n\nSERVICES:\n\tfor _, qs := range services {\n\t\tfor _, es := range project.Services {\n\t\t\tif es.Name == qs {\n\t\t\t\tcontinue SERVICES\n\t\t\t}\n\t\t}\n\t\treturn project, fmt.Errorf(\"no such service: %q: %w\", qs, api.ErrNotFound)\n\t}\n\tproject, err := project.WithSelectedServices(services)\n\tif err != nil {\n\t\treturn project, err\n\t}\n\n\treturn project, nil\n}\n\nfunc increment(scale *int) *int {\n\ti := 1\n\tif scale != nil {\n\t\ti = *scale + 1\n\t}\n\treturn &i\n}\n\nfunc (s *composeService) actualVolumes(ctx context.Context, projectName string) (types.Volumes, error) {\n\topts := client.VolumeListOptions{\n\t\tFilters: projectFilter(projectName),\n\t}\n\tvolumes, err := s.apiClient().VolumeList(ctx, opts)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tactual := types.Volumes{}\n\tfor _, vol := range volumes.Items {\n\t\tactual[vol.Labels[api.VolumeLabel]] = types.VolumeConfig{\n\t\t\tName:   vol.Name,\n\t\t\tDriver: vol.Driver,\n\t\t\tLabels: vol.Labels,\n\t\t}\n\t}\n\treturn actual, nil\n}\n\nfunc (s *composeService) actualNetworks(ctx context.Context, projectName string) (types.Networks, error) {\n\tnetworks, err := s.apiClient().NetworkList(ctx, client.NetworkListOptions{\n\t\tFilters: projectFilter(projectName),\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tactual := types.Networks{}\n\tfor _, net := range networks.Items {\n\t\tactual[net.Labels[api.NetworkLabel]] = 
types.NetworkConfig{\n\t\t\tName:   net.Name,\n\t\t\tDriver: net.Driver,\n\t\t\tLabels: net.Labels,\n\t\t}\n\t}\n\treturn actual, nil\n}\n\nvar swarmEnabled = struct {\n\tonce sync.Once\n\tval  bool\n\terr  error\n}{}\n\nfunc (s *composeService) isSwarmEnabled(ctx context.Context) (bool, error) {\n\tswarmEnabled.once.Do(func() {\n\t\tres, err := s.apiClient().Info(ctx, client.InfoOptions{})\n\t\tif err != nil {\n\t\t\tswarmEnabled.err = err\n\t\t}\n\t\tswitch res.Info.Swarm.LocalNodeState {\n\t\tcase swarm.LocalNodeStateInactive, swarm.LocalNodeStateLocked:\n\t\t\tswarmEnabled.val = false\n\t\tdefault:\n\t\t\tswarmEnabled.val = true\n\t\t}\n\t})\n\treturn swarmEnabled.val, swarmEnabled.err\n}\n\ntype runtimeVersionCache struct {\n\tonce sync.Once\n\tval  string\n\terr  error\n}\n\nvar runtimeVersion runtimeVersionCache\n\nfunc (s *composeService) RuntimeVersion(ctx context.Context) (string, error) {\n\t// TODO(thaJeztah): this should use Client.ClientVersion), which has the negotiated version.\n\truntimeVersion.once.Do(func() {\n\t\tversion, err := s.apiClient().ServerVersion(ctx, client.ServerVersionOptions{})\n\t\tif err != nil {\n\t\t\truntimeVersion.err = err\n\t\t}\n\t\truntimeVersion.val = version.APIVersion\n\t})\n\treturn runtimeVersion.val, runtimeVersion.err\n}\n"
  },
  {
    "path": "pkg/compose/container.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"io\"\n\n\t\"github.com/moby/moby/client\"\n)\n\nvar _ io.ReadCloser = ContainerStdout{}\n\n// ContainerStdout implements ReadCloser for moby.HijackedResponse\ntype ContainerStdout struct {\n\tclient.HijackedResponse\n}\n\n// Read implements io.ReadCloser\nfunc (l ContainerStdout) Read(p []byte) (n int, err error) {\n\treturn l.Reader.Read(p)\n}\n\n// Close implements io.ReadCloser\nfunc (l ContainerStdout) Close() error {\n\tl.HijackedResponse.Close()\n\treturn nil\n}\n\nvar _ io.WriteCloser = ContainerStdin{}\n\n// ContainerStdin implements WriteCloser for moby.HijackedResponse\ntype ContainerStdin struct {\n\tclient.HijackedResponse\n}\n\n// Write implements io.WriteCloser\nfunc (c ContainerStdin) Write(p []byte) (n int, err error) {\n\treturn c.Conn.Write(p)\n}\n\n// Close implements io.WriteCloser\nfunc (c ContainerStdin) Close() error {\n\treturn c.CloseWrite()\n}\n"
  },
  {
    "path": "pkg/compose/containers.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// Containers is a set of moby Container\ntype Containers []container.Summary\n\ntype oneOff int\n\nconst (\n\toneOffInclude = oneOff(iota)\n\toneOffExclude\n\toneOffOnly\n)\n\nfunc (s *composeService) getContainers(ctx context.Context, project string, oneOff oneOff, all bool, selectedServices ...string) (Containers, error) {\n\tres, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: getDefaultFilters(project, oneOff, selectedServices...),\n\t\tAll:     all,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcontainers := Containers(res.Items)\n\tif len(selectedServices) > 1 {\n\t\tcontainers = containers.filter(isService(selectedServices...))\n\t}\n\treturn containers, nil\n}\n\nfunc getDefaultFilters(projectName string, oneOff oneOff, selectedServices ...string) client.Filters {\n\tf := projectFilter(projectName)\n\tif len(selectedServices) == 1 {\n\t\tf.Add(\"label\", serviceFilter(selectedServices[0]))\n\t}\n\tf.Add(\"label\", hasConfigHashLabel())\n\tswitch oneOff {\n\tcase oneOffOnly:\n\t\tf.Add(\"label\", 
oneOffFilter(true))\n\tcase oneOffExclude:\n\t\tf.Add(\"label\", oneOffFilter(false))\n\tcase oneOffInclude:\n\t}\n\treturn f\n}\n\nfunc (s *composeService) getSpecifiedContainer(ctx context.Context, projectName string, oneOff oneOff, all bool, serviceName string, containerIndex int) (container.Summary, error) {\n\tdefaultFilters := getDefaultFilters(projectName, oneOff, serviceName)\n\tif containerIndex > 0 {\n\t\tdefaultFilters.Add(\"label\", containerNumberFilter(containerIndex))\n\t}\n\tres, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: defaultFilters,\n\t\tAll:     all,\n\t})\n\tif err != nil {\n\t\treturn container.Summary{}, err\n\t}\n\tcontainers := res.Items\n\tif len(containers) < 1 {\n\t\tif containerIndex > 0 {\n\t\t\treturn container.Summary{}, fmt.Errorf(\"service %q is not running container #%d\", serviceName, containerIndex)\n\t\t}\n\t\treturn container.Summary{}, fmt.Errorf(\"service %q is not running\", serviceName)\n\t}\n\n\t// Sort by container number first, then put one-off containers at the end\n\tsort.Slice(containers, func(i, j int) bool {\n\t\tnumberLabelX, _ := strconv.Atoi(containers[i].Labels[api.ContainerNumberLabel])\n\t\tnumberLabelY, _ := strconv.Atoi(containers[j].Labels[api.ContainerNumberLabel])\n\t\tIsOneOffLabelTrueX := containers[i].Labels[api.OneoffLabel] == \"True\"\n\t\tIsOneOffLabelTrueY := containers[j].Labels[api.OneoffLabel] == \"True\"\n\n\t\tif IsOneOffLabelTrueX || IsOneOffLabelTrueY {\n\t\t\treturn !IsOneOffLabelTrueX && IsOneOffLabelTrueY\n\t\t}\n\n\t\treturn numberLabelX < numberLabelY\n\t})\n\treturn containers[0], nil\n}\n\n// containerPredicate defines a predicate we want a container to satisfy for filtering operations\ntype containerPredicate func(c container.Summary) bool\n\nfunc matches(c container.Summary, predicates ...containerPredicate) bool {\n\tfor _, predicate := range predicates {\n\t\tif !predicate(c) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc 
isService(services ...string) containerPredicate {\n\treturn func(c container.Summary) bool {\n\t\tservice := c.Labels[api.ServiceLabel]\n\t\treturn slices.Contains(services, service)\n\t}\n}\n\n// isOrphaned is a predicate to select containers without a matching service definition in compose project\nfunc isOrphaned(project *types.Project) containerPredicate {\n\tservices := append(project.ServiceNames(), project.DisabledServiceNames()...)\n\treturn func(c container.Summary) bool {\n\t\t// One-off container\n\t\tv, ok := c.Labels[api.OneoffLabel]\n\t\tif ok && v == \"True\" {\n\t\t\treturn c.State == container.StateExited || c.State == container.StateDead\n\t\t}\n\t\t// Service that is not defined in the compose model\n\t\tservice := c.Labels[api.ServiceLabel]\n\t\treturn !slices.Contains(services, service)\n\t}\n}\n\nfunc isNotOneOff(c container.Summary) bool {\n\tv, ok := c.Labels[api.OneoffLabel]\n\treturn !ok || v == \"False\"\n}\n\n// filter returns the Containers whose elements match all predicates\nfunc (containers Containers) filter(predicates ...containerPredicate) Containers {\n\tvar filtered Containers\n\tfor _, c := range containers {\n\t\tif matches(c, predicates...) {\n\t\t\tfiltered = append(filtered, c)\n\t\t}\n\t}\n\treturn filtered\n}\n\nfunc (containers Containers) names() []string {\n\tvar names []string\n\tfor _, c := range containers {\n\t\tnames = append(names, getCanonicalContainerName(c))\n\t}\n\treturn names\n}\n\nfunc (containers Containers) forEach(fn func(container.Summary)) {\n\tfor _, c := range containers {\n\t\tfn(c)\n\t}\n}\n\nfunc (containers Containers) sorted() Containers {\n\tsort.Slice(containers, func(i, j int) bool {\n\t\treturn getCanonicalContainerName(containers[i]) < getCanonicalContainerName(containers[j])\n\t})\n\treturn containers\n}\n"
  },
  {
    "path": "pkg/compose/convergence.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"maps\"\n\t\"slices\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/platforms\"\n\t\"github.com/moby/moby/api/types/container\"\n\tmmount \"github.com/moby/moby/api/types/mount\"\n\t\"github.com/moby/moby/client\"\n\tspecs \"github.com/opencontainers/image-spec/specs-go/v1\"\n\t\"github.com/sirupsen/logrus\"\n\t\"go.opentelemetry.io/otel/attribute\"\n\t\"go.opentelemetry.io/otel/trace\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nconst (\n\tdoubledContainerNameWarning = \"WARNING: The %q service is using the custom container name %q. \" +\n\t\t\"Docker requires each container to have a unique name. 
\" +\n\t\t\"Remove the custom name to scale the service\"\n)\n\n// convergence manages service's container lifecycle.\n// Based on initially observed state, it reconciles the existing container with desired state, which might include\n// re-creating container, adding or removing replicas, or starting stopped containers.\n// Cross services dependencies are managed by creating services in expected order and updating `service:xx` reference\n// when a service has converged, so dependent ones can be managed with resolved containers references.\ntype convergence struct {\n\tcompose    *composeService\n\tservices   map[string]Containers\n\tnetworks   map[string]string\n\tvolumes    map[string]string\n\tstateMutex sync.Mutex\n}\n\nfunc (c *convergence) getObservedState(serviceName string) Containers {\n\tc.stateMutex.Lock()\n\tdefer c.stateMutex.Unlock()\n\treturn c.services[serviceName]\n}\n\nfunc (c *convergence) setObservedState(serviceName string, containers Containers) {\n\tc.stateMutex.Lock()\n\tdefer c.stateMutex.Unlock()\n\tc.services[serviceName] = containers\n}\n\nfunc newConvergence(services []string, state Containers, networks map[string]string, volumes map[string]string, s *composeService) *convergence {\n\tobservedState := map[string]Containers{}\n\tfor _, s := range services {\n\t\tobservedState[s] = Containers{}\n\t}\n\tfor _, c := range state.filter(isNotOneOff) {\n\t\tservice := c.Labels[api.ServiceLabel]\n\t\tobservedState[service] = append(observedState[service], c)\n\t}\n\treturn &convergence{\n\t\tcompose:  s,\n\t\tservices: observedState,\n\t\tnetworks: networks,\n\t\tvolumes:  volumes,\n\t}\n}\n\nfunc (c *convergence) apply(ctx context.Context, project *types.Project, options api.CreateOptions) error {\n\treturn InDependencyOrder(ctx, project, func(ctx context.Context, name string) error {\n\t\tservice, err := project.GetService(name)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn tracing.SpanWrapFunc(\"service/apply\", 
tracing.ServiceOptions(service), func(ctx context.Context) error {\n\t\t\tstrategy := options.RecreateDependencies\n\t\t\tif slices.Contains(options.Services, name) {\n\t\t\t\tstrategy = options.Recreate\n\t\t\t}\n\t\t\treturn c.ensureService(ctx, project, service, strategy, options.Inherit, options.Timeout)\n\t\t})(ctx)\n\t})\n}\n\nfunc (c *convergence) ensureService(ctx context.Context, project *types.Project, service types.ServiceConfig, recreate string, inherit bool, timeout *time.Duration) error { //nolint:gocyclo\n\tif service.Provider != nil {\n\t\treturn c.compose.runPlugin(ctx, project, service, \"up\")\n\t}\n\texpected, err := getScale(service)\n\tif err != nil {\n\t\treturn err\n\t}\n\tcontainers := c.getObservedState(service.Name)\n\tactual := len(containers)\n\tupdated := make(Containers, expected)\n\n\teg, ctx := errgroup.WithContext(ctx)\n\n\terr = c.resolveServiceReferences(&service)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tsort.Slice(containers, func(i, j int) bool {\n\t\t// select obsolete containers first, so they get removed as we scale down\n\t\tif obsolete, _ := c.mustRecreate(service, containers[i], recreate); obsolete {\n\t\t\t// i is obsolete, so must be first in the list\n\t\t\treturn true\n\t\t}\n\t\tif obsolete, _ := c.mustRecreate(service, containers[j], recreate); obsolete {\n\t\t\t// j is obsolete, so must be first in the list\n\t\t\treturn false\n\t\t}\n\n\t\t// For up-to-date containers, sort by container number to preserve low-values in container numbers\n\t\tni, erri := strconv.Atoi(containers[i].Labels[api.ContainerNumberLabel])\n\t\tnj, errj := strconv.Atoi(containers[j].Labels[api.ContainerNumberLabel])\n\t\tif erri == nil && errj == nil {\n\t\t\treturn ni > nj\n\t\t}\n\n\t\t// If we don't get a container number (?) 
just sort by creation date\n\t\treturn containers[i].Created < containers[j].Created\n\t})\n\n\tslices.Reverse(containers)\n\tfor i, ctr := range containers {\n\t\tif i >= expected {\n\t\t\t// Scale Down\n\t\t\t// As we sorted containers, obsolete ones and/or highest number will be removed\n\t\t\tctr := ctr\n\t\t\ttraceOpts := append(tracing.ServiceOptions(service), tracing.ContainerOptions(ctr)...)\n\t\t\teg.Go(tracing.SpanWrapFuncForErrGroup(ctx, \"service/scale/down\", traceOpts, func(ctx context.Context) error {\n\t\t\t\treturn c.compose.stopAndRemoveContainer(ctx, ctr, &service, timeout, false)\n\t\t\t}))\n\t\t\tcontinue\n\t\t}\n\n\t\tmustRecreate, err := c.mustRecreate(service, ctr, recreate)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif mustRecreate {\n\t\t\terr := c.stopDependentContainers(ctx, project, service)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\ti, ctr := i, ctr\n\t\t\teg.Go(tracing.SpanWrapFuncForErrGroup(ctx, \"container/recreate\", tracing.ContainerOptions(ctr), func(ctx context.Context) error {\n\t\t\t\trecreated, err := c.compose.recreateContainer(ctx, project, service, ctr, inherit, timeout)\n\t\t\t\tupdated[i] = recreated\n\t\t\t\treturn err\n\t\t\t}))\n\t\t\tcontinue\n\t\t}\n\n\t\t// Enforce non-diverged containers are running\n\t\tname := getContainerProgressName(ctr)\n\t\tswitch ctr.State {\n\t\tcase container.StateRunning:\n\t\t\tc.compose.events.On(runningEvent(name))\n\t\tcase container.StateCreated:\n\t\tcase container.StateRestarting:\n\t\tcase container.StateExited:\n\t\tdefault:\n\t\t\tctr := ctr\n\t\t\teg.Go(tracing.EventWrapFuncForErrGroup(ctx, \"service/start\", tracing.ContainerOptions(ctr), func(ctx context.Context) error {\n\t\t\t\treturn c.compose.startContainer(ctx, ctr)\n\t\t\t}))\n\t\t}\n\t\tupdated[i] = ctr\n\t}\n\n\tnext := nextContainerNumber(containers)\n\tfor i := 0; i < expected-actual; i++ {\n\t\t// Scale UP\n\t\tnumber := next + i\n\t\tname := getContainerName(project.Name, service, 
number)\n\t\teventOpts := tracing.SpanOptions{trace.WithAttributes(attribute.String(\"container.name\", name))}\n\t\teg.Go(tracing.EventWrapFuncForErrGroup(ctx, \"service/scale/up\", eventOpts, func(ctx context.Context) error {\n\t\t\topts := createOptions{\n\t\t\t\tAutoRemove:        false,\n\t\t\t\tAttachStdin:       false,\n\t\t\t\tUseNetworkAliases: true,\n\t\t\t\tLabels:            mergeLabels(service.Labels, service.CustomLabels),\n\t\t\t}\n\t\t\tctr, err := c.compose.createContainer(ctx, project, service, name, number, opts)\n\t\t\tupdated[actual+i] = ctr\n\t\t\treturn err\n\t\t}))\n\t\tcontinue\n\t}\n\n\terr = eg.Wait()\n\tc.setObservedState(service.Name, updated)\n\treturn err\n}\n\nfunc (c *convergence) stopDependentContainers(ctx context.Context, project *types.Project, service types.ServiceConfig) error {\n\t// Stop dependent containers, so they will be restarted after service is re-created\n\tdependents := project.GetDependentsForService(service, func(dependency types.ServiceDependency) bool {\n\t\treturn dependency.Restart\n\t})\n\tif len(dependents) == 0 {\n\t\treturn nil\n\t}\n\terr := c.compose.stop(ctx, project.Name, api.StopOptions{\n\t\tServices: dependents,\n\t\tProject:  project,\n\t}, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, name := range dependents {\n\t\tdependentStates := c.getObservedState(name)\n\t\tfor i, dependent := range dependentStates {\n\t\t\tdependent.State = container.StateExited\n\t\t\tdependentStates[i] = dependent\n\t\t}\n\t\tc.setObservedState(name, dependentStates)\n\t}\n\treturn nil\n}\n\nfunc getScale(config types.ServiceConfig) (int, error) {\n\tscale := config.GetScale()\n\tif scale > 1 && config.ContainerName != \"\" {\n\t\treturn 0, fmt.Errorf(doubledContainerNameWarning,\n\t\t\tconfig.Name,\n\t\t\tconfig.ContainerName)\n\t}\n\treturn scale, nil\n}\n\n// resolveServiceReferences replaces reference to another service with reference to an actual container\nfunc (c *convergence) 
resolveServiceReferences(service *types.ServiceConfig) error {\n\terr := c.resolveVolumeFrom(service)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = c.resolveSharedNamespaces(service)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (c *convergence) resolveVolumeFrom(service *types.ServiceConfig) error {\n\tfor i, vol := range service.VolumesFrom {\n\t\tspec := strings.Split(vol, \":\")\n\t\tif len(spec) == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tif spec[0] == \"container\" {\n\t\t\tservice.VolumesFrom[i] = spec[1]\n\t\t\tcontinue\n\t\t}\n\t\tname := spec[0]\n\t\tdependencies := c.getObservedState(name)\n\t\tif len(dependencies) == 0 {\n\t\t\treturn fmt.Errorf(\"cannot share volume with service %s: container missing\", name)\n\t\t}\n\t\tservice.VolumesFrom[i] = dependencies.sorted()[0].ID\n\t}\n\treturn nil\n}\n\nfunc (c *convergence) resolveSharedNamespaces(service *types.ServiceConfig) error {\n\tstr := service.NetworkMode\n\tif name := getDependentServiceFromMode(str); name != \"\" {\n\t\tdependencies := c.getObservedState(name)\n\t\tif len(dependencies) == 0 {\n\t\t\treturn fmt.Errorf(\"cannot share network namespace with service %s: container missing\", name)\n\t\t}\n\t\tservice.NetworkMode = types.ContainerPrefix + dependencies.sorted()[0].ID\n\t}\n\n\tstr = service.Ipc\n\tif name := getDependentServiceFromMode(str); name != \"\" {\n\t\tdependencies := c.getObservedState(name)\n\t\tif len(dependencies) == 0 {\n\t\t\treturn fmt.Errorf(\"cannot share IPC namespace with service %s: container missing\", name)\n\t\t}\n\t\tservice.Ipc = types.ContainerPrefix + dependencies.sorted()[0].ID\n\t}\n\n\tstr = service.Pid\n\tif name := getDependentServiceFromMode(str); name != \"\" {\n\t\tdependencies := c.getObservedState(name)\n\t\tif len(dependencies) == 0 {\n\t\t\treturn fmt.Errorf(\"cannot share PID namespace with service %s: container missing\", name)\n\t\t}\n\t\tservice.Pid = types.ContainerPrefix + dependencies.sorted()[0].ID\n\t}\n\n\treturn 
nil\n}\n\nfunc (c *convergence) mustRecreate(expected types.ServiceConfig, actual container.Summary, policy string) (bool, error) {\n\tif policy == api.RecreateNever {\n\t\treturn false, nil\n\t}\n\tif policy == api.RecreateForce {\n\t\treturn true, nil\n\t}\n\tconfigHash, err := ServiceHash(expected)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tconfigChanged := actual.Labels[api.ConfigHashLabel] != configHash\n\timageUpdated := actual.Labels[api.ImageDigestLabel] != expected.CustomLabels[api.ImageDigestLabel]\n\tif configChanged || imageUpdated {\n\t\treturn true, nil\n\t}\n\n\tif c.networks != nil && actual.State == \"running\" {\n\t\tif checkExpectedNetworks(expected, actual, c.networks) {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\n\tif c.volumes != nil {\n\t\tif checkExpectedVolumes(expected, actual, c.volumes) {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\n\treturn false, nil\n}\n\nfunc checkExpectedNetworks(expected types.ServiceConfig, actual container.Summary, networks map[string]string) bool {\n\t// check the networks container is connected to are the expected ones\n\tfor net := range expected.Networks {\n\t\tid := networks[net]\n\t\tif id == \"swarm\" {\n\t\t\t// corner-case : swarm overlay network isn't visible until a container is attached\n\t\t\tcontinue\n\t\t}\n\t\tfound := false\n\t\tfor _, settings := range actual.NetworkSettings.Networks {\n\t\t\tif settings.NetworkID == id {\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\t// config is up-to-date but container is not connected to network\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc checkExpectedVolumes(expected types.ServiceConfig, actual container.Summary, volumes map[string]string) bool {\n\t// check container's volume mounts and search for the expected ones\n\tfor _, vol := range expected.Volumes {\n\t\tif vol.Type != string(mmount.TypeVolume) {\n\t\t\tcontinue\n\t\t}\n\t\tif vol.Source == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tid := 
volumes[vol.Source]\n\t\tfound := false\n\t\tfor _, mount := range actual.Mounts {\n\t\t\tif mount.Type != mmount.TypeVolume {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif mount.Name == id {\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\t// config is up-to-date but container doesn't have volume mounted\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc getContainerName(projectName string, service types.ServiceConfig, number int) string {\n\tname := getDefaultContainerName(projectName, service.Name, strconv.Itoa(number))\n\tif service.ContainerName != \"\" {\n\t\tname = service.ContainerName\n\t}\n\treturn name\n}\n\nfunc getDefaultContainerName(projectName, serviceName, index string) string {\n\treturn strings.Join([]string{projectName, serviceName, index}, api.Separator)\n}\n\nfunc getContainerProgressName(ctr container.Summary) string {\n\treturn \"Container \" + getCanonicalContainerName(ctr)\n}\n\nfunc containerEvents(containers Containers, eventFunc func(string) api.Resource) []api.Resource {\n\tevents := []api.Resource{}\n\tfor _, ctr := range containers {\n\t\tevents = append(events, eventFunc(getContainerProgressName(ctr)))\n\t}\n\treturn events\n}\n\nfunc containerReasonEvents(containers Containers, eventFunc func(string, string) api.Resource, reason string) []api.Resource {\n\tevents := []api.Resource{}\n\tfor _, ctr := range containers {\n\t\tevents = append(events, eventFunc(getContainerProgressName(ctr), reason))\n\t}\n\treturn events\n}\n\n// ServiceConditionRunningOrHealthy is a service condition on status running or healthy\nconst ServiceConditionRunningOrHealthy = \"running_or_healthy\"\n\n//nolint:gocyclo\nfunc (s *composeService) waitDependencies(ctx context.Context, project *types.Project, dependant string, dependencies types.DependsOnConfig, containers Containers, timeout time.Duration) error {\n\tif timeout > 0 {\n\t\twithTimeout, cancelFunc := context.WithTimeout(ctx, timeout)\n\t\tdefer cancelFunc()\n\t\tctx = 
withTimeout\n\t}\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor dep, config := range dependencies {\n\t\tif shouldWait, err := shouldWaitForDependency(dep, config, project); err != nil {\n\t\t\treturn err\n\t\t} else if !shouldWait {\n\t\t\tcontinue\n\t\t}\n\n\t\twaitingFor := containers.filter(isService(dep), isNotOneOff)\n\t\ts.events.On(containerEvents(waitingFor, waiting)...)\n\t\tif len(waitingFor) == 0 {\n\t\t\tif config.Required {\n\t\t\t\treturn fmt.Errorf(\"%s is missing dependency %s\", dependant, dep)\n\t\t\t}\n\t\t\tlogrus.Warnf(\"%s is missing dependency %s\", dependant, dep)\n\t\t\tcontinue\n\t\t}\n\n\t\teg.Go(func() error {\n\t\t\tticker := time.NewTicker(500 * time.Millisecond)\n\t\t\tdefer ticker.Stop()\n\t\t\tfor {\n\t\t\t\tselect {\n\t\t\t\tcase <-ticker.C:\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\tswitch config.Condition {\n\t\t\t\tcase ServiceConditionRunningOrHealthy:\n\t\t\t\t\tisHealthy, err := s.isServiceHealthy(ctx, waitingFor, true)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tif !config.Required {\n\t\t\t\t\t\t\ts.events.On(containerReasonEvents(waitingFor, skippedEvent,\n\t\t\t\t\t\t\t\tfmt.Sprintf(\"optional dependency %q is not running or is unhealthy\", dep))...)\n\t\t\t\t\t\t\tlogrus.Warnf(\"optional dependency %q is not running or is unhealthy: %s\", dep, err.Error())\n\t\t\t\t\t\t\treturn nil\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif isHealthy {\n\t\t\t\t\t\ts.events.On(containerEvents(waitingFor, healthy)...)\n\t\t\t\t\t\treturn nil\n\t\t\t\t\t}\n\t\t\t\tcase types.ServiceConditionHealthy:\n\t\t\t\t\tisHealthy, err := s.isServiceHealthy(ctx, waitingFor, false)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tif !config.Required {\n\t\t\t\t\t\t\ts.events.On(containerReasonEvents(waitingFor, skippedEvent,\n\t\t\t\t\t\t\t\tfmt.Sprintf(\"optional dependency %q failed to start\", dep))...)\n\t\t\t\t\t\t\tlogrus.Warnf(\"optional dependency %q failed to start: %s\", dep, 
err.Error())\n\t\t\t\t\t\t\treturn nil\n\t\t\t\t\t\t}\n\t\t\t\t\t\ts.events.On(containerEvents(waitingFor, func(s string) api.Resource {\n\t\t\t\t\t\t\treturn errorEventf(s, \"dependency %s failed to start\", dep)\n\t\t\t\t\t\t})...)\n\t\t\t\t\t\treturn fmt.Errorf(\"dependency failed to start: %w\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif isHealthy {\n\t\t\t\t\t\ts.events.On(containerEvents(waitingFor, healthy)...)\n\t\t\t\t\t\treturn nil\n\t\t\t\t\t}\n\t\t\t\tcase types.ServiceConditionCompletedSuccessfully:\n\t\t\t\t\tisExited, code, err := s.isServiceCompleted(ctx, waitingFor)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif isExited {\n\t\t\t\t\t\tif code == 0 {\n\t\t\t\t\t\t\ts.events.On(containerEvents(waitingFor, exited)...)\n\t\t\t\t\t\t\treturn nil\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tmessageSuffix := fmt.Sprintf(\"%q didn't complete successfully: exit %d\", dep, code)\n\t\t\t\t\t\tif !config.Required {\n\t\t\t\t\t\t\t// optional -> mark as skipped & don't propagate error\n\t\t\t\t\t\t\ts.events.On(containerReasonEvents(waitingFor, skippedEvent,\n\t\t\t\t\t\t\t\tfmt.Sprintf(\"optional dependency %s\", messageSuffix))...)\n\t\t\t\t\t\t\tlogrus.Warnf(\"optional dependency %s\", messageSuffix)\n\t\t\t\t\t\t\treturn nil\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tmsg := fmt.Sprintf(\"service %s\", messageSuffix)\n\t\t\t\t\t\ts.events.On(containerEvents(waitingFor, func(s string) api.Resource {\n\t\t\t\t\t\t\treturn errorEventf(s, \"service %s\", messageSuffix)\n\t\t\t\t\t\t})...)\n\t\t\t\t\t\treturn errors.New(msg)\n\t\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t\tlogrus.Warnf(\"unsupported depends_on condition: %s\", config.Condition)\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n\terr := eg.Wait()\n\tif errors.Is(err, context.DeadlineExceeded) {\n\t\treturn fmt.Errorf(\"timeout waiting for dependencies\")\n\t}\n\treturn err\n}\n\nfunc shouldWaitForDependency(serviceName string, dependencyConfig types.ServiceDependency, project *types.Project) (bool, error) 
{\n\tif dependencyConfig.Condition == types.ServiceConditionStarted {\n\t\t// already managed by InDependencyOrder\n\t\treturn false, nil\n\t}\n\tif service, err := project.GetService(serviceName); err != nil {\n\t\tfor _, ds := range project.DisabledServices {\n\t\t\tif ds.Name == serviceName {\n\t\t\t\t// don't wait for disabled service (--no-deps)\n\t\t\t\treturn false, nil\n\t\t\t}\n\t\t}\n\t\treturn false, err\n\t} else if service.GetScale() == 0 {\n\t\t// don't wait for a dependency that is configured to have 0 containers running\n\t\treturn false, nil\n\t} else if service.Provider != nil {\n\t\t// don't wait for provider services\n\t\treturn false, nil\n\t}\n\treturn true, nil\n}\n\nfunc nextContainerNumber(containers []container.Summary) int {\n\tmaxNumber := 0\n\tfor _, c := range containers {\n\t\ts, ok := c.Labels[api.ContainerNumberLabel]\n\t\tif !ok {\n\t\t\tlogrus.Warnf(\"container %s is missing %s label\", c.ID, api.ContainerNumberLabel)\n\t\t\tcontinue\n\t\t}\n\t\tn, err := strconv.Atoi(s)\n\t\tif err != nil {\n\t\t\tlogrus.Warnf(\"container %s has invalid %s label: %s\", c.ID, api.ContainerNumberLabel, s)\n\t\t\tcontinue\n\t\t}\n\t\tif n > maxNumber {\n\t\t\tmaxNumber = n\n\t\t}\n\t}\n\treturn maxNumber + 1\n}\n\nfunc (s *composeService) createContainer(ctx context.Context, project *types.Project, service types.ServiceConfig,\n\tname string, number int, opts createOptions,\n) (ctr container.Summary, err error) {\n\teventName := \"Container \" + name\n\ts.events.On(creatingEvent(eventName))\n\tctr, err = s.createMobyContainer(ctx, project, service, name, number, nil, opts)\n\tif err != nil {\n\t\tif ctx.Err() == nil {\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:     eventName,\n\t\t\t\tStatus: api.Error,\n\t\t\t\tText:   err.Error(),\n\t\t\t})\n\t\t}\n\t\treturn ctr, err\n\t}\n\ts.events.On(createdEvent(eventName))\n\treturn ctr, nil\n}\n\nfunc (s *composeService) recreateContainer(ctx context.Context, project *types.Project, service types.ServiceConfig,\n\treplaced container.Summary, inherit bool, timeout *time.Duration,\n) (created container.Summary, err error) {\n\teventName := getContainerProgressName(replaced)\n\ts.events.On(newEvent(eventName, api.Working, \"Recreate\"))\n\tdefer func() {\n\t\tif err != nil && ctx.Err() == nil {\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:     eventName,\n\t\t\t\tStatus: api.Error,\n\t\t\t\tText:   err.Error(),\n\t\t\t})\n\t\t}\n\t}()\n\n\tnumber, err := strconv.Atoi(replaced.Labels[api.ContainerNumberLabel])\n\tif err != nil {\n\t\treturn created, err\n\t}\n\n\tvar inherited *container.Summary\n\tif inherit {\n\t\tinherited = &replaced\n\t}\n\n\treplacedContainerName := service.ContainerName\n\tif replacedContainerName == \"\" {\n\t\treplacedContainerName = service.Name + api.Separator + strconv.Itoa(number)\n\t}\n\tname := getContainerName(project.Name, service, number)\n\ttmpName := fmt.Sprintf(\"%s_%s\", replaced.ID[:12], name)\n\topts := createOptions{\n\t\tAutoRemove:        false,\n\t\tAttachStdin:       false,\n\t\tUseNetworkAliases: true,\n\t\tLabels:            mergeLabels(service.Labels, service.CustomLabels).Add(api.ContainerReplaceLabel, replacedContainerName),\n\t}\n\tcreated, err = s.createMobyContainer(ctx, project, service, tmpName, number, inherited, opts)\n\tif err != nil {\n\t\treturn created, err\n\t}\n\n\ttimeoutInSecond := utils.DurationSecondToInt(timeout)\n\t_, err = s.apiClient().ContainerStop(ctx, replaced.ID, client.ContainerStopOptions{Timeout: timeoutInSecond})\n\tif err != nil {\n\t\treturn created, err\n\t}\n\n\t_, err = s.apiClient().ContainerRemove(ctx, replaced.ID, client.ContainerRemoveOptions{})\n\tif err != nil {\n\t\treturn created, err\n\t}\n\n\t_, err = s.apiClient().ContainerRename(ctx, tmpName, client.ContainerRenameOptions{\n\t\tNewName: name,\n\t})\n\tif err != nil {\n\t\treturn created, err\n\t}\n\n\ts.events.On(newEvent(eventName, api.Done, \"Recreated\"))\n\treturn created, err\n}\n\n// force sequential 
calls to ContainerStart to prevent race condition in engine assigning ports from ranges\nvar startMx sync.Mutex\n\nfunc (s *composeService) startContainer(ctx context.Context, ctr container.Summary) error {\n\ts.events.On(newEvent(getContainerProgressName(ctr), api.Working, \"Restart\"))\n\tstartMx.Lock()\n\tdefer startMx.Unlock()\n\t_, err := s.apiClient().ContainerStart(ctx, ctr.ID, client.ContainerStartOptions{})\n\tif err != nil {\n\t\treturn err\n\t}\n\ts.events.On(newEvent(getContainerProgressName(ctr), api.Done, \"Restarted\"))\n\treturn nil\n}\n\nfunc (s *composeService) createMobyContainer(ctx context.Context, project *types.Project, service types.ServiceConfig,\n\tname string, number int, inherit *container.Summary, opts createOptions,\n) (container.Summary, error) {\n\tvar created container.Summary\n\tcfgs, err := s.getCreateConfigs(ctx, project, service, number, inherit, opts)\n\tif err != nil {\n\t\treturn created, err\n\t}\n\tplatform := service.Platform\n\tif platform == \"\" {\n\t\tplatform = project.Environment[\"DOCKER_DEFAULT_PLATFORM\"]\n\t}\n\tvar plat *specs.Platform\n\tif platform != \"\" {\n\t\tvar p specs.Platform\n\t\tp, err = platforms.Parse(platform)\n\t\tif err != nil {\n\t\t\treturn created, err\n\t\t}\n\t\tplat = &p\n\t}\n\n\tresponse, err := s.apiClient().ContainerCreate(ctx, client.ContainerCreateOptions{\n\t\tName:             name,\n\t\tPlatform:         plat,\n\t\tConfig:           cfgs.Container,\n\t\tHostConfig:       cfgs.Host,\n\t\tNetworkingConfig: cfgs.Network,\n\t})\n\tif err != nil {\n\t\treturn created, err\n\t}\n\tfor _, warning := range response.Warnings {\n\t\ts.events.On(api.Resource{\n\t\t\tID:     service.Name,\n\t\t\tStatus: api.Warning,\n\t\t\tText:   warning,\n\t\t})\n\t}\n\tres, err := s.apiClient().ContainerInspect(ctx, response.ID, client.ContainerInspectOptions{})\n\tif err != nil {\n\t\treturn created, err\n\t}\n\tcreated = container.Summary{\n\t\tID:     res.Container.ID,\n\t\tLabels: 
res.Container.Config.Labels,\n\t\tNames:  []string{res.Container.Name},\n\t\tNetworkSettings: &container.NetworkSettingsSummary{\n\t\t\tNetworks: res.Container.NetworkSettings.Networks,\n\t\t},\n\t}\n\n\treturn created, nil\n}\n\n// getLinks mimics V1 compose/service.py::Service::_get_links()\nfunc (s *composeService) getLinks(ctx context.Context, projectName string, service types.ServiceConfig, number int) ([]string, error) {\n\tvar links []string\n\tformat := func(k, v string) string {\n\t\treturn fmt.Sprintf(\"%s:%s\", k, v)\n\t}\n\tgetServiceContainers := func(serviceName string) (Containers, error) {\n\t\treturn s.getContainers(ctx, projectName, oneOffExclude, true, serviceName)\n\t}\n\n\tfor _, rawLink := range service.Links {\n\t\t// linkName if informed like in: \"serviceName[:linkName]\"\n\t\tlinkServiceName, linkName, ok := strings.Cut(rawLink, \":\")\n\t\tif !ok {\n\t\t\tlinkName = linkServiceName\n\t\t}\n\t\tcnts, err := getServiceContainers(linkServiceName)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor _, c := range cnts {\n\t\t\tcontainerName := getCanonicalContainerName(c)\n\t\t\tlinks = append(links,\n\t\t\t\tformat(containerName, linkName),\n\t\t\t\tformat(containerName, linkServiceName+api.Separator+strconv.Itoa(number)),\n\t\t\t\tformat(containerName, strings.Join([]string{projectName, linkServiceName, strconv.Itoa(number)}, api.Separator)),\n\t\t\t)\n\t\t}\n\t}\n\n\tif service.Labels[api.OneoffLabel] == \"True\" {\n\t\tcnts, err := getServiceContainers(service.Name)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor _, c := range cnts {\n\t\t\tcontainerName := getCanonicalContainerName(c)\n\t\t\tlinks = append(links,\n\t\t\t\tformat(containerName, service.Name),\n\t\t\t\tformat(containerName, strings.TrimPrefix(containerName, projectName+api.Separator)),\n\t\t\t\tformat(containerName, containerName),\n\t\t\t)\n\t\t}\n\t}\n\n\tfor _, rawExtLink := range service.ExternalLinks {\n\t\texternalLink, linkName, ok := 
strings.Cut(rawExtLink, \":\")\n\t\tif !ok {\n\t\t\tlinkName = externalLink\n\t\t}\n\t\tlinks = append(links, format(externalLink, linkName))\n\t}\n\treturn links, nil\n}\n\nfunc (s *composeService) isServiceHealthy(ctx context.Context, containers Containers, fallbackRunning bool) (bool, error) {\n\tfor _, c := range containers {\n\t\tres, err := s.apiClient().ContainerInspect(ctx, c.ID, client.ContainerInspectOptions{})\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\t\tctr := res.Container\n\t\tname := ctr.Name[1:]\n\n\t\tif ctr.State.Status == container.StateExited {\n\t\t\treturn false, fmt.Errorf(\"container %s exited (%d)\", name, ctr.State.ExitCode)\n\t\t}\n\n\t\tnoHealthcheck := ctr.Config.Healthcheck == nil || (len(ctr.Config.Healthcheck.Test) > 0 && ctr.Config.Healthcheck.Test[0] == \"NONE\")\n\t\tif noHealthcheck && fallbackRunning {\n\t\t\t// Container does not define a health check, but we can fall back to \"running\" state\n\t\t\treturn ctr.State != nil && ctr.State.Status == container.StateRunning, nil\n\t\t}\n\n\t\tif ctr.State == nil || ctr.State.Health == nil {\n\t\t\treturn false, fmt.Errorf(\"container %s has no healthcheck configured\", name)\n\t\t}\n\t\tswitch ctr.State.Health.Status {\n\t\tcase container.Healthy:\n\t\t\t// Continue by checking the next container.\n\t\tcase container.Unhealthy:\n\t\t\treturn false, fmt.Errorf(\"container %s is unhealthy\", name)\n\t\tcase container.Starting:\n\t\t\treturn false, nil\n\t\tdefault:\n\t\t\treturn false, fmt.Errorf(\"container %s had unexpected health status %q\", name, ctr.State.Health.Status)\n\t\t}\n\t}\n\treturn true, nil\n}\n\nfunc (s *composeService) isServiceCompleted(ctx context.Context, containers Containers) (bool, int, error) {\n\tfor _, c := range containers {\n\t\tres, err := s.apiClient().ContainerInspect(ctx, c.ID, client.ContainerInspectOptions{})\n\t\tif err != nil {\n\t\t\treturn false, 0, err\n\t\t}\n\t\tif res.Container.State != nil && res.Container.State.Status == 
container.StateExited {\n\t\t\treturn true, res.Container.State.ExitCode, nil\n\t\t}\n\t}\n\treturn false, 0, nil\n}\n\nfunc (s *composeService) startService(ctx context.Context,\n\tproject *types.Project, service types.ServiceConfig,\n\tcontainers Containers, listener api.ContainerEventListener,\n\ttimeout time.Duration,\n) error {\n\tif service.Deploy != nil && service.Deploy.Replicas != nil && *service.Deploy.Replicas == 0 {\n\t\treturn nil\n\t}\n\n\terr := s.waitDependencies(ctx, project, service.Name, service.DependsOn, containers, timeout)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif len(containers) == 0 {\n\t\tif service.GetScale() == 0 {\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"service %q has no container to start\", service.Name)\n\t}\n\n\tfor _, ctr := range containers.filter(isService(service.Name)) {\n\t\tif ctr.State == container.StateRunning {\n\t\t\tcontinue\n\t\t}\n\n\t\terr = s.injectSecrets(ctx, project, service, ctr.ID)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\terr = s.injectConfigs(ctx, project, service, ctr.ID)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\teventName := getContainerProgressName(ctr)\n\t\ts.events.On(startingEvent(eventName))\n\t\t_, err = s.apiClient().ContainerStart(ctx, ctr.ID, client.ContainerStartOptions{})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tfor _, hook := range service.PostStart {\n\t\t\terr = s.runHook(ctx, ctr, service, hook, listener)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\ts.events.On(startedEvent(eventName))\n\t}\n\treturn nil\n}\n\nfunc mergeLabels(ls ...types.Labels) types.Labels {\n\tmerged := types.Labels{}\n\tfor _, l := range ls {\n\t\tmaps.Copy(merged, l)\n\t}\n\treturn merged\n}\n"
  },
  {
    "path": "pkg/compose/convergence_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/netip\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/config/configfile\"\n\t\"github.com/google/go-cmp/cmp/cmpopts\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\nfunc TestContainerName(t *testing.T) {\n\ts := types.ServiceConfig{\n\t\tName:          \"testservicename\",\n\t\tContainerName: \"testcontainername\",\n\t\tScale:         intPtr(1),\n\t\tDeploy:        &types.DeployConfig{},\n\t}\n\tret, err := getScale(s)\n\tassert.NilError(t, err)\n\tassert.Equal(t, ret, *s.Scale)\n\n\ts.Scale = intPtr(0)\n\tret, err = getScale(s)\n\tassert.NilError(t, err)\n\tassert.Equal(t, ret, *s.Scale)\n\n\ts.Scale = intPtr(2)\n\t_, err = getScale(s)\n\tassert.Error(t, err, fmt.Sprintf(doubledContainerNameWarning, s.Name, s.ContainerName))\n}\n\nfunc intPtr(i int) *int {\n\treturn &i\n}\n\nfunc TestServiceLinks(t *testing.T) {\n\tconst dbContainerName = \"/\" + testProject + \"-db-1\"\n\tconst webContainerName = \"/\" + testProject + \"-web-1\"\n\ts := types.ServiceConfig{\n\t\tName:  
\"web\",\n\t\tScale: intPtr(1),\n\t}\n\n\tcontainerListOptions := client.ContainerListOptions{\n\t\tFilters: projectFilter(testProject).Add(\"label\",\n\t\t\tserviceFilter(\"db\"),\n\t\t\toneOffFilter(false),\n\t\t\thasConfigHashLabel(),\n\t\t),\n\t\tAll: true,\n\t}\n\n\tt.Run(\"service links default\", func(t *testing.T) {\n\t\tmockCtrl := gomock.NewController(t)\n\t\tdefer mockCtrl.Finish()\n\n\t\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\t\tcli := mocks.NewMockCli(mockCtrl)\n\t\ttested, err := NewComposeService(cli)\n\t\tassert.NilError(t, err)\n\t\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\t\ts.Links = []string{\"db\"}\n\n\t\tc := testContainer(\"db\", dbContainerName, false)\n\t\tapiClient.EXPECT().ContainerList(gomock.Any(), containerListOptions).Return(client.ContainerListResult{\n\t\t\tItems: []container.Summary{c},\n\t\t}, nil)\n\n\t\tlinks, err := tested.(*composeService).getLinks(t.Context(), testProject, s, 1)\n\t\tassert.NilError(t, err)\n\n\t\tassert.Equal(t, len(links), 3)\n\t\tassert.Equal(t, links[0], \"testProject-db-1:db\")\n\t\tassert.Equal(t, links[1], \"testProject-db-1:db-1\")\n\t\tassert.Equal(t, links[2], \"testProject-db-1:testProject-db-1\")\n\t})\n\n\tt.Run(\"service links\", func(t *testing.T) {\n\t\tmockCtrl := gomock.NewController(t)\n\t\tdefer mockCtrl.Finish()\n\t\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\t\tcli := mocks.NewMockCli(mockCtrl)\n\t\ttested, err := NewComposeService(cli)\n\t\tassert.NilError(t, err)\n\t\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\t\ts.Links = []string{\"db:db\"}\n\n\t\tc := testContainer(\"db\", dbContainerName, false)\n\n\t\tapiClient.EXPECT().ContainerList(gomock.Any(), containerListOptions).Return(client.ContainerListResult{\n\t\t\tItems: []container.Summary{c},\n\t\t}, nil)\n\t\tlinks, err := tested.(*composeService).getLinks(t.Context(), testProject, s, 1)\n\t\tassert.NilError(t, err)\n\n\t\tassert.Equal(t, len(links), 3)\n\t\tassert.Equal(t, links[0], 
\"testProject-db-1:db\")\n\t\tassert.Equal(t, links[1], \"testProject-db-1:db-1\")\n\t\tassert.Equal(t, links[2], \"testProject-db-1:testProject-db-1\")\n\t})\n\n\tt.Run(\"service links name\", func(t *testing.T) {\n\t\tmockCtrl := gomock.NewController(t)\n\t\tdefer mockCtrl.Finish()\n\t\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\t\tcli := mocks.NewMockCli(mockCtrl)\n\t\ttested, err := NewComposeService(cli)\n\t\tassert.NilError(t, err)\n\t\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\t\ts.Links = []string{\"db:dbname\"}\n\n\t\tc := testContainer(\"db\", dbContainerName, false)\n\t\tapiClient.EXPECT().ContainerList(gomock.Any(), containerListOptions).Return(client.ContainerListResult{\n\t\t\tItems: []container.Summary{c},\n\t\t}, nil)\n\n\t\tlinks, err := tested.(*composeService).getLinks(t.Context(), testProject, s, 1)\n\t\tassert.NilError(t, err)\n\n\t\tassert.Equal(t, len(links), 3)\n\t\tassert.Equal(t, links[0], \"testProject-db-1:dbname\")\n\t\tassert.Equal(t, links[1], \"testProject-db-1:db-1\")\n\t\tassert.Equal(t, links[2], \"testProject-db-1:testProject-db-1\")\n\t})\n\n\tt.Run(\"service links external links\", func(t *testing.T) {\n\t\tmockCtrl := gomock.NewController(t)\n\t\tdefer mockCtrl.Finish()\n\t\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\t\tcli := mocks.NewMockCli(mockCtrl)\n\t\ttested, err := NewComposeService(cli)\n\t\tassert.NilError(t, err)\n\t\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\t\ts.Links = []string{\"db:dbname\"}\n\t\ts.ExternalLinks = []string{\"db1:db2\"}\n\n\t\tc := testContainer(\"db\", dbContainerName, false)\n\t\tapiClient.EXPECT().ContainerList(gomock.Any(), containerListOptions).Return(client.ContainerListResult{\n\t\t\tItems: []container.Summary{c},\n\t\t}, nil)\n\n\t\tlinks, err := tested.(*composeService).getLinks(t.Context(), testProject, s, 1)\n\t\tassert.NilError(t, err)\n\n\t\tassert.Equal(t, len(links), 4)\n\t\tassert.Equal(t, links[0], 
\"testProject-db-1:dbname\")\n\t\tassert.Equal(t, links[1], \"testProject-db-1:db-1\")\n\t\tassert.Equal(t, links[2], \"testProject-db-1:testProject-db-1\")\n\n\t\t// ExternalLink\n\t\tassert.Equal(t, links[3], \"db1:db2\")\n\t})\n\n\tt.Run(\"service links itself oneoff\", func(t *testing.T) {\n\t\tmockCtrl := gomock.NewController(t)\n\t\tdefer mockCtrl.Finish()\n\t\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\t\tcli := mocks.NewMockCli(mockCtrl)\n\t\ttested, err := NewComposeService(cli)\n\t\tassert.NilError(t, err)\n\t\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\t\ts.Links = []string{}\n\t\ts.ExternalLinks = []string{}\n\t\ts.Labels = s.Labels.Add(api.OneoffLabel, \"True\")\n\n\t\tc := testContainer(\"web\", webContainerName, true)\n\t\tcontainerListOptionsOneOff := client.ContainerListOptions{\n\t\t\tFilters: projectFilter(testProject).Add(\"label\",\n\t\t\t\tserviceFilter(\"web\"),\n\t\t\t\toneOffFilter(false),\n\t\t\t\thasConfigHashLabel(),\n\t\t\t),\n\t\t\tAll: true,\n\t\t}\n\t\tapiClient.EXPECT().ContainerList(gomock.Any(), containerListOptionsOneOff).Return(client.ContainerListResult{\n\t\t\tItems: []container.Summary{c},\n\t\t}, nil)\n\n\t\tlinks, err := tested.(*composeService).getLinks(t.Context(), testProject, s, 1)\n\t\tassert.NilError(t, err)\n\n\t\tassert.Equal(t, len(links), 3)\n\t\tassert.Equal(t, links[0], \"testProject-web-1:web\")\n\t\tassert.Equal(t, links[1], \"testProject-web-1:web-1\")\n\t\tassert.Equal(t, links[2], \"testProject-web-1:testProject-web-1\")\n\t})\n}\n\nfunc TestWaitDependencies(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\tcli := mocks.NewMockCli(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\tt.Run(\"should skip dependencies with scale 0\", func(t *testing.T) {\n\t\tdbService := types.ServiceConfig{Name: \"db\", Scale: 
intPtr(0)}\n\t\tredisService := types.ServiceConfig{Name: \"redis\", Scale: intPtr(0)}\n\t\tproject := types.Project{Name: strings.ToLower(testProject), Services: types.Services{\n\t\t\t\"db\":    dbService,\n\t\t\t\"redis\": redisService,\n\t\t}}\n\t\tdependencies := types.DependsOnConfig{\n\t\t\t\"db\":    {Condition: ServiceConditionRunningOrHealthy},\n\t\t\t\"redis\": {Condition: ServiceConditionRunningOrHealthy},\n\t\t}\n\t\tassert.NilError(t, tested.(*composeService).waitDependencies(t.Context(), &project, \"\", dependencies, nil, 0))\n\t})\n\tt.Run(\"should skip dependencies with condition service_started\", func(t *testing.T) {\n\t\tdbService := types.ServiceConfig{Name: \"db\", Scale: intPtr(1)}\n\t\tredisService := types.ServiceConfig{Name: \"redis\", Scale: intPtr(1)}\n\t\tproject := types.Project{Name: strings.ToLower(testProject), Services: types.Services{\n\t\t\t\"db\":    dbService,\n\t\t\t\"redis\": redisService,\n\t\t}}\n\t\tdependencies := types.DependsOnConfig{\n\t\t\t\"db\":    {Condition: types.ServiceConditionStarted, Required: true},\n\t\t\t\"redis\": {Condition: types.ServiceConditionStarted, Required: true},\n\t\t}\n\t\tassert.NilError(t, tested.(*composeService).waitDependencies(t.Context(), &project, \"\", dependencies, nil, 0))\n\t})\n}\n\nfunc TestIsServiceHealthy(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\tcli := mocks.NewMockCli(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\tctx := t.Context()\n\n\tt.Run(\"disabled healthcheck with fallback to running\", func(t *testing.T) {\n\t\tcontainerID := \"test-container-id\"\n\t\tcontainers := Containers{\n\t\t\t{ID: containerID},\n\t\t}\n\n\t\t// Container with disabled healthcheck (Test: [\"NONE\"])\n\t\tapiClient.EXPECT().ContainerInspect(ctx, containerID, 
gomock.Any()).Return(client.ContainerInspectResult{\n\t\t\tContainer: container.InspectResponse{\n\t\t\t\tID:    containerID,\n\t\t\t\tName:  \"test-container\",\n\t\t\t\tState: &container.State{Status: \"running\"},\n\t\t\t\tConfig: &container.Config{\n\t\t\t\t\tHealthcheck: &container.HealthConfig{\n\t\t\t\t\t\tTest: []string{\"NONE\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\n\t\tisHealthy, err := tested.(*composeService).isServiceHealthy(ctx, containers, true)\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, true, isHealthy, \"Container with disabled healthcheck should be considered healthy when running with fallbackRunning=true\")\n\t})\n\n\tt.Run(\"disabled healthcheck without fallback\", func(t *testing.T) {\n\t\tcontainerID := \"test-container-id\"\n\t\tcontainers := Containers{\n\t\t\t{ID: containerID},\n\t\t}\n\n\t\t// Container with disabled healthcheck (Test: [\"NONE\"]) but fallbackRunning=false\n\t\tapiClient.EXPECT().ContainerInspect(ctx, containerID, gomock.Any()).Return(client.ContainerInspectResult{\n\t\t\tContainer: container.InspectResponse{\n\t\t\t\tID:    containerID,\n\t\t\t\tName:  \"test-container\",\n\t\t\t\tState: &container.State{Status: \"running\"},\n\t\t\t\tConfig: &container.Config{\n\t\t\t\t\tHealthcheck: &container.HealthConfig{\n\t\t\t\t\t\tTest: []string{\"NONE\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\n\t\t_, err := tested.(*composeService).isServiceHealthy(ctx, containers, false)\n\t\tassert.ErrorContains(t, err, \"has no healthcheck configured\")\n\t})\n\n\tt.Run(\"no healthcheck with fallback to running\", func(t *testing.T) {\n\t\tcontainerID := \"test-container-id\"\n\t\tcontainers := Containers{\n\t\t\t{ID: containerID},\n\t\t}\n\n\t\t// Container with no healthcheck at all\n\t\tapiClient.EXPECT().ContainerInspect(ctx, containerID, gomock.Any()).Return(client.ContainerInspectResult{\n\t\t\tContainer: container.InspectResponse{\n\t\t\t\tID:    containerID,\n\t\t\t\tName:  
\"test-container\",\n\t\t\t\tState: &container.State{Status: \"running\"},\n\t\t\t\tConfig: &container.Config{\n\t\t\t\t\tHealthcheck: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\n\t\tisHealthy, err := tested.(*composeService).isServiceHealthy(ctx, containers, true)\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, true, isHealthy, \"Container with no healthcheck should be considered healthy when running with fallbackRunning=true\")\n\t})\n\n\tt.Run(\"exited container with disabled healthcheck\", func(t *testing.T) {\n\t\tcontainerID := \"test-container-id\"\n\t\tcontainers := Containers{\n\t\t\t{ID: containerID},\n\t\t}\n\n\t\t// Container with disabled healthcheck but exited\n\t\tapiClient.EXPECT().ContainerInspect(ctx, containerID, gomock.Any()).Return(client.ContainerInspectResult{\n\t\t\tContainer: container.InspectResponse{\n\t\t\t\tID:   containerID,\n\t\t\t\tName: \"test-container\",\n\t\t\t\tState: &container.State{\n\t\t\t\t\tStatus:   \"exited\",\n\t\t\t\t\tExitCode: 1,\n\t\t\t\t},\n\t\t\t\tConfig: &container.Config{\n\t\t\t\t\tHealthcheck: &container.HealthConfig{\n\t\t\t\t\t\tTest: []string{\"NONE\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\n\t\t_, err := tested.(*composeService).isServiceHealthy(ctx, containers, true)\n\t\tassert.ErrorContains(t, err, \"exited\")\n\t})\n\n\tt.Run(\"healthy container with healthcheck\", func(t *testing.T) {\n\t\tcontainerID := \"test-container-id\"\n\t\tcontainers := Containers{\n\t\t\t{ID: containerID},\n\t\t}\n\n\t\t// Container with actual healthcheck that is healthy\n\t\tapiClient.EXPECT().ContainerInspect(ctx, containerID, gomock.Any()).Return(client.ContainerInspectResult{\n\t\t\tContainer: container.InspectResponse{\n\t\t\t\tID:   containerID,\n\t\t\t\tName: \"test-container\",\n\t\t\t\tState: &container.State{\n\t\t\t\t\tStatus: \"running\",\n\t\t\t\t\tHealth: &container.Health{\n\t\t\t\t\t\tStatus: container.Healthy,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tConfig: &container.Config{\n\t\t\t\t\tHealthcheck: 
&container.HealthConfig{\n\t\t\t\t\t\tTest: []string{\"CMD\", \"curl\", \"-f\", \"http://localhost\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\n\t\tisHealthy, err := tested.(*composeService).isServiceHealthy(ctx, containers, false)\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, true, isHealthy, \"Container with healthy status should be healthy\")\n\t})\n}\n\nfunc TestCreateMobyContainer(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\tcli := mocks.NewMockCli(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\tcli.EXPECT().ConfigFile().Return(&configfile.ConfigFile{}).AnyTimes()\n\tapiClient.EXPECT().DaemonHost().Return(\"\").AnyTimes()\n\tapiClient.EXPECT().ImageInspect(anyCancellableContext(), gomock.Any()).Return(client.ImageInspectResult{}, nil).AnyTimes()\n\n\t// force `RuntimeVersion` to fetch fresh version\n\truntimeVersion = runtimeVersionCache{}\n\tapiClient.EXPECT().ServerVersion(gomock.Any(), gomock.Any()).Return(client.ServerVersionResult{\n\t\tAPIVersion: \"1.44\",\n\t}, nil).AnyTimes()\n\n\tservice := types.ServiceConfig{\n\t\tName: \"test\",\n\t\tNetworks: map[string]*types.ServiceNetworkConfig{\n\t\t\t\"a\": {\n\t\t\t\tPriority: 10,\n\t\t\t},\n\t\t\t\"b\": {\n\t\t\t\tPriority: 100,\n\t\t\t},\n\t\t},\n\t}\n\tproject := types.Project{\n\t\tName: \"bork\",\n\t\tServices: types.Services{\n\t\t\t\"test\": service,\n\t\t},\n\t\tNetworks: types.Networks{\n\t\t\t\"a\": types.NetworkConfig{\n\t\t\t\tName: \"a-moby-name\",\n\t\t\t},\n\t\t\t\"b\": types.NetworkConfig{\n\t\t\t\tName: \"b-moby-name\",\n\t\t\t},\n\t\t},\n\t}\n\n\tvar got client.ContainerCreateOptions\n\tapiClient.EXPECT().ContainerCreate(gomock.Any(), gomock.Any()).DoAndReturn(func(_ context.Context, opts client.ContainerCreateOptions) (client.ContainerCreateResult, error) {\n\t\tgot = opts\n\t\treturn 
client.ContainerCreateResult{ID: \"an-id\"}, nil\n\t})\n\n\tapiClient.EXPECT().ContainerInspect(gomock.Any(), gomock.Eq(\"an-id\"), gomock.Any()).Times(1).Return(client.ContainerInspectResult{\n\t\tContainer: container.InspectResponse{\n\t\t\tID:              \"an-id\",\n\t\t\tName:            \"a-name\",\n\t\t\tConfig:          &container.Config{},\n\t\t\tNetworkSettings: &container.NetworkSettings{},\n\t\t},\n\t}, nil)\n\n\t_, err = tested.(*composeService).createMobyContainer(t.Context(), &project, service, \"test\", 0, nil, createOptions{\n\t\tLabels: make(types.Labels),\n\t})\n\tvar falseBool bool\n\twant := client.ContainerCreateOptions{\n\t\tConfig: &container.Config{\n\t\t\tAttachStdout: true,\n\t\t\tAttachStderr: true,\n\t\t\tImage:        \"bork-test\",\n\t\t\tLabels: map[string]string{\n\t\t\t\t\"com.docker.compose.config-hash\": \"8dbce408396f8986266bc5deba0c09cfebac63c95c2238e405c7bee5f1bd84b8\",\n\t\t\t\t\"com.docker.compose.depends_on\":  \"\",\n\t\t\t},\n\t\t},\n\t\tHostConfig: &container.HostConfig{\n\t\t\tPortBindings: network.PortMap{},\n\t\t\tExtraHosts:   []string{},\n\t\t\tTmpfs:        map[string]string{},\n\t\t\tResources: container.Resources{\n\t\t\t\tOomKillDisable: &falseBool,\n\t\t\t},\n\t\t\tNetworkMode: \"b-moby-name\",\n\t\t},\n\t\tNetworkingConfig: &network.NetworkingConfig{\n\t\t\tEndpointsConfig: map[string]*network.EndpointSettings{\n\t\t\t\t\"a-moby-name\": {\n\t\t\t\t\tIPAMConfig: &network.EndpointIPAMConfig{},\n\t\t\t\t\tAliases:    []string{\"bork-test-0\"},\n\t\t\t\t},\n\t\t\t\t\"b-moby-name\": {\n\t\t\t\t\tIPAMConfig: &network.EndpointIPAMConfig{},\n\t\t\t\t\tAliases:    []string{\"bork-test-0\"},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tName: \"test\",\n\t}\n\tassert.DeepEqual(t, want, got, cmpopts.EquateComparable(netip.Addr{}), cmpopts.EquateEmpty())\n\tassert.NilError(t, err)\n}\n"
  },
  {
    "path": "pkg/compose/convert.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"time\"\n\n\tcompose \"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/api/types/container\"\n)\n\n// ToMobyEnv convert into []string\nfunc ToMobyEnv(environment compose.MappingWithEquals) []string {\n\tvar env []string\n\tfor k, v := range environment {\n\t\tif v == nil {\n\t\t\tenv = append(env, k)\n\t\t} else {\n\t\t\tenv = append(env, fmt.Sprintf(\"%s=%s\", k, *v))\n\t\t}\n\t}\n\treturn env\n}\n\n// ToMobyHealthCheck convert into container.HealthConfig\nfunc (s *composeService) ToMobyHealthCheck(ctx context.Context, check *compose.HealthCheckConfig) (*container.HealthConfig, error) {\n\tif check == nil {\n\t\treturn nil, nil\n\t}\n\tvar (\n\t\tinterval time.Duration\n\t\ttimeout  time.Duration\n\t\tperiod   time.Duration\n\t\tretries  int\n\t)\n\tif check.Interval != nil {\n\t\tinterval = time.Duration(*check.Interval)\n\t}\n\tif check.Timeout != nil {\n\t\ttimeout = time.Duration(*check.Timeout)\n\t}\n\tif check.StartPeriod != nil {\n\t\tperiod = time.Duration(*check.StartPeriod)\n\t}\n\tif check.Retries != nil {\n\t\tretries = int(*check.Retries)\n\t}\n\ttest := check.Test\n\tif check.Disable {\n\t\ttest = []string{\"NONE\"}\n\t}\n\tvar startInterval time.Duration\n\tif check.StartInterval != nil {\n\t\tstartInterval = 
time.Duration(*check.StartInterval)\n\t\tif check.StartPeriod == nil {\n\t\t\t// see https://github.com/moby/moby/issues/48874\n\t\t\treturn nil, errors.New(\"healthcheck.start_interval requires healthcheck.start_period to be set\")\n\t\t}\n\t}\n\treturn &container.HealthConfig{\n\t\tTest:          test,\n\t\tInterval:      interval,\n\t\tTimeout:       timeout,\n\t\tStartPeriod:   period,\n\t\tStartInterval: startInterval,\n\t\tRetries:       retries,\n\t}, nil\n}\n\n// ToSeconds convert into seconds\nfunc ToSeconds(d *compose.Duration) *int {\n\tif d == nil {\n\t\treturn nil\n\t}\n\ts := int(time.Duration(*d).Seconds())\n\treturn &s\n}\n"
  },
  {
    "path": "pkg/compose/cp.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/moby/go-archive\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\ntype copyDirection int\n\nconst (\n\tfromService copyDirection = 1 << iota\n\ttoService\n\tacrossServices = fromService | toService\n)\n\nfunc (s *composeService) Copy(ctx context.Context, projectName string, options api.CopyOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.copy(ctx, projectName, options)\n\t}, \"copy\", s.events)\n}\n\nfunc (s *composeService) copy(ctx context.Context, projectName string, options api.CopyOptions) error {\n\tprojectName = strings.ToLower(projectName)\n\tsrcService, srcPath := splitCpArg(options.Source)\n\tdestService, dstPath := splitCpArg(options.Destination)\n\n\tvar direction copyDirection\n\tvar serviceName string\n\tvar copyFunc func(ctx context.Context, containerID string, srcPath string, dstPath string, opts api.CopyOptions) error\n\tif srcService != \"\" {\n\t\tdirection |= fromService\n\t\tserviceName = srcService\n\t\tcopyFunc = s.copyFromContainer\n\t}\n\tif destService != \"\" {\n\t\tdirection |= 
toService\n\t\tserviceName = destService\n\t\tcopyFunc = s.copyToContainer\n\t}\n\tif direction == acrossServices {\n\t\treturn errors.New(\"copying between services is not supported\")\n\t}\n\n\tif direction == 0 {\n\t\treturn errors.New(\"unknown copy direction\")\n\t}\n\n\tcontainers, err := s.listContainersTargetedForCopy(ctx, projectName, options, direction, serviceName)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tg := errgroup.Group{}\n\tfor _, cont := range containers {\n\t\tctr := cont\n\t\tg.Go(func() error {\n\t\t\tname := getCanonicalContainerName(ctr)\n\t\t\tvar msg string\n\t\t\tif direction == fromService {\n\t\t\t\tmsg = fmt.Sprintf(\"%s:%s to %s\", name, srcPath, dstPath)\n\t\t\t} else {\n\t\t\t\tmsg = fmt.Sprintf(\"%s to %s:%s\", srcPath, name, dstPath)\n\t\t\t}\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:      name,\n\t\t\t\tText:    api.StatusCopying,\n\t\t\t\tDetails: msg,\n\t\t\t\tStatus:  api.Working,\n\t\t\t})\n\t\t\tif err := copyFunc(ctx, ctr.ID, srcPath, dstPath, options); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:      name,\n\t\t\t\tText:    api.StatusCopied,\n\t\t\t\tDetails: msg,\n\t\t\t\tStatus:  api.Done,\n\t\t\t})\n\t\t\treturn nil\n\t\t})\n\t}\n\n\treturn g.Wait()\n}\n\nfunc (s *composeService) listContainersTargetedForCopy(ctx context.Context, projectName string, options api.CopyOptions, direction copyDirection, serviceName string) (Containers, error) {\n\tvar containers Containers\n\tvar err error\n\tswitch {\n\tcase options.Index > 0:\n\t\tctr, err := s.getSpecifiedContainer(ctx, projectName, oneOffExclude, true, serviceName, options.Index)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn append(containers, ctr), nil\n\tdefault:\n\t\twithOneOff := oneOffExclude\n\t\tif options.All {\n\t\t\twithOneOff = oneOffInclude\n\t\t}\n\t\tcontainers, err = s.getContainers(ctx, projectName, withOneOff, true, serviceName)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tif 
len(containers) < 1 {\n\t\t\treturn nil, fmt.Errorf(\"no container found for service %q\", serviceName)\n\t\t}\n\t\tif direction == fromService {\n\t\t\treturn containers[:1], err\n\t\t}\n\t\treturn containers, err\n\t}\n}\n\nfunc (s *composeService) copyToContainer(ctx context.Context, containerID string, srcPath string, dstPath string, opts api.CopyOptions) error {\n\tvar err error\n\tif srcPath != \"-\" {\n\t\t// Get an absolute source path.\n\t\tsrcPath, err = resolveLocalPath(srcPath)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Prepare destination copy info by stat-ing the container path.\n\tdstInfo := archive.CopyInfo{Path: dstPath}\n\tvar dstStat container.PathStat\n\tres, err := s.apiClient().ContainerStatPath(ctx, containerID, client.ContainerStatPathOptions{\n\t\tPath: dstPath,\n\t})\n\tif err == nil {\n\t\tdstStat = res.Stat\n\t}\n\n\t// If the destination is a symbolic link, we should evaluate it.\n\tif err == nil && dstStat.Mode&os.ModeSymlink != 0 {\n\t\tlinkTarget := dstStat.LinkTarget\n\t\tif !isAbs(linkTarget) {\n\t\t\t// Join with the parent directory.\n\t\t\tdstParent, _ := archive.SplitPathDirEntry(dstPath)\n\t\t\tlinkTarget = filepath.Join(dstParent, linkTarget)\n\t\t}\n\n\t\tdstInfo.Path = linkTarget\n\t\tres, err = s.apiClient().ContainerStatPath(ctx, containerID, client.ContainerStatPathOptions{\n\t\t\tPath: linkTarget,\n\t\t})\n\t\tif err == nil {\n\t\t\tdstStat = res.Stat\n\t\t}\n\t}\n\n\t// Validate the destination path\n\tif err := command.ValidateOutputPathFileMode(dstStat.Mode); err != nil {\n\t\treturn fmt.Errorf(`destination \"%s:%s\" must be a directory or a regular file: %w`, containerID, dstPath, err)\n\t}\n\n\t// Ignore any error and assume that the parent directory of the destination\n\t// path exists, in which case the copy may still succeed. If there is any\n\t// type of conflict (e.g., non-directory overwriting an existing directory\n\t// or vice versa) the extraction will fail. 
If the destination simply did\n\t// not exist, but the parent directory does, the extraction will still\n\t// succeed.\n\tif err == nil {\n\t\tdstInfo.Exists, dstInfo.IsDir = true, dstStat.Mode.IsDir()\n\t}\n\n\tvar (\n\t\tcontent         io.Reader\n\t\tresolvedDstPath string\n\t)\n\n\tif srcPath == \"-\" {\n\t\tcontent = s.stdin()\n\t\tresolvedDstPath = dstInfo.Path\n\t\tif !dstInfo.IsDir {\n\t\t\treturn fmt.Errorf(\"destination \\\"%s:%s\\\" must be a directory\", containerID, dstPath)\n\t\t}\n\t} else {\n\t\t// Prepare source copy info.\n\t\tsrcInfo, err := archive.CopyInfoSourcePath(srcPath, opts.FollowLink)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tsrcArchive, err := archive.TarResource(srcInfo)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer srcArchive.Close() //nolint:errcheck\n\n\t\t// With the stat info about the local source as well as the\n\t\t// destination, we have enough information to know whether we need to\n\t\t// alter the archive that we upload so that when the server extracts\n\t\t// it to the specified directory in the container we get the desired\n\t\t// copy behavior.\n\n\t\t// See comments in the implementation of `archive.PrepareArchiveCopy`\n\t\t// for exactly what goes into deciding how and whether the source\n\t\t// archive needs to be altered for the correct copy behavior when it is\n\t\t// extracted. 
This function also infers from the source and destination\n\t\t// info which directory to extract to, which may be the parent of the\n\t\t// destination that the user specified.\n\t\t// Don't create the archive if running in Dry Run mode\n\t\tif !s.dryRun {\n\t\t\tdstDir, preparedArchive, err := archive.PrepareArchiveCopy(srcArchive, srcInfo, dstInfo)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdefer preparedArchive.Close() //nolint:errcheck\n\n\t\t\tresolvedDstPath = dstDir\n\t\t\tcontent = preparedArchive\n\t\t}\n\t}\n\n\t_, err = s.apiClient().CopyToContainer(ctx, containerID, client.CopyToContainerOptions{\n\t\tDestinationPath:           resolvedDstPath,\n\t\tContent:                   content,\n\t\tAllowOverwriteDirWithFile: false,\n\t\tCopyUIDGID:                opts.CopyUIDGID,\n\t})\n\treturn err\n}\n\nfunc (s *composeService) copyFromContainer(ctx context.Context, containerID, srcPath, dstPath string, opts api.CopyOptions) error {\n\tvar err error\n\tif dstPath != \"-\" {\n\t\t// Get an absolute destination path.\n\t\tdstPath, err = resolveLocalPath(dstPath)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif err := command.ValidateOutputPath(dstPath); err != nil {\n\t\treturn err\n\t}\n\n\t// if client requests to follow symbol link, then must decide target file to be copied\n\tvar rebaseName string\n\tif opts.FollowLink {\n\t\tvar srcStat container.PathStat\n\t\tres, err := s.apiClient().ContainerStatPath(ctx, containerID, client.ContainerStatPathOptions{\n\t\t\tPath: srcPath,\n\t\t})\n\t\tif err == nil {\n\t\t\tsrcStat = res.Stat\n\t\t}\n\n\t\t// If the destination is a symbolic link, we should follow it.\n\t\tif err == nil && srcStat.Mode&os.ModeSymlink != 0 {\n\t\t\tlinkTarget := srcStat.LinkTarget\n\t\t\tif !isAbs(linkTarget) {\n\t\t\t\t// Join with the parent directory.\n\t\t\t\tsrcParent, _ := archive.SplitPathDirEntry(srcPath)\n\t\t\t\tlinkTarget = filepath.Join(srcParent, linkTarget)\n\t\t\t}\n\n\t\t\tlinkTarget, rebaseName 
= archive.GetRebaseName(srcPath, linkTarget)\n\t\t\tsrcPath = linkTarget\n\t\t}\n\t}\n\n\tres, err := s.apiClient().CopyFromContainer(ctx, containerID, client.CopyFromContainerOptions{\n\t\tSourcePath: srcPath,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer res.Content.Close() //nolint:errcheck\n\n\tif dstPath == \"-\" {\n\t\t_, err = io.Copy(s.stdout(), res.Content)\n\t\treturn err\n\t}\n\n\tsrcInfo := archive.CopyInfo{\n\t\tPath:       srcPath,\n\t\tExists:     true,\n\t\tIsDir:      res.Stat.Mode.IsDir(),\n\t\tRebaseName: rebaseName,\n\t}\n\n\tpreArchive := res.Content\n\tif srcInfo.RebaseName != \"\" {\n\t\t_, srcBase := archive.SplitPathDirEntry(srcInfo.Path)\n\t\tpreArchive = archive.RebaseArchiveEntries(res.Content, srcBase, srcInfo.RebaseName)\n\t}\n\n\treturn archive.CopyTo(preArchive, srcInfo, dstPath)\n}\n\n// IsAbs is a platform-agnostic wrapper for filepath.IsAbs.\n//\n// On Windows, golang filepath.IsAbs does not consider a path \\windows\\system32\n// as absolute as it doesn't start with a drive-letter/colon combination. However,\n// in docker we need to verify things such as WORKDIR /windows/system32 in\n// a Dockerfile (which gets translated to \\windows\\system32 when being processed\n// by the daemon). 
This SHOULD be treated as absolute from a docker processing\n// perspective.\nfunc isAbs(path string) bool {\n\treturn filepath.IsAbs(path) || strings.HasPrefix(path, string(os.PathSeparator))\n}\n\nfunc splitCpArg(arg string) (ctr, path string) {\n\tif isAbs(arg) {\n\t\t// Explicit local absolute path, e.g., `C:\\foo` or `/foo`.\n\t\treturn \"\", arg\n\t}\n\n\tctr, path, ok := strings.Cut(arg, \":\")\n\n\tif !ok || strings.HasPrefix(ctr, \".\") {\n\t\t// Either there's no `:` in the arg\n\t\t// OR it's an explicit local relative path like `./file:name.txt`.\n\t\treturn \"\", arg\n\t}\n\n\treturn ctr, path\n}\n\nfunc resolveLocalPath(localPath string) (absPath string, err error) {\n\tif absPath, err = filepath.Abs(localPath); err != nil {\n\t\treturn absPath, err\n\t}\n\treturn archive.PreserveTrailingDotOrSeparator(absPath, localPath), nil\n}\n"
  },
  {
    "path": "pkg/compose/create.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/netip\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/paths\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/moby/moby/api/types/blkiodev\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/mount\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/moby/moby/client/pkg/versions\"\n\t\"github.com/sirupsen/logrus\"\n\tcdi \"tags.cncf.io/container-device-interface/pkg/parser\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\ntype createOptions struct {\n\tAutoRemove        bool\n\tAttachStdin       bool\n\tUseNetworkAliases bool\n\tLabels            types.Labels\n}\n\ntype createConfigs struct {\n\tContainer *container.Config\n\tHost      *container.HostConfig\n\tNetwork   *network.NetworkingConfig\n\tLinks     []string\n}\n\nfunc (s *composeService) Create(ctx context.Context, project *types.Project, createOpts api.CreateOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.create(ctx, project, createOpts)\n\t}, \"create\", s.events)\n}\n\nfunc (s *composeService) create(ctx 
context.Context, project *types.Project, options api.CreateOptions) error {\n\tif len(options.Services) == 0 {\n\t\toptions.Services = project.ServiceNames()\n\t}\n\n\terr := project.CheckContainerNameUnicity()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = s.ensureImagesExists(ctx, project, options.Build, options.QuietPull)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = s.ensureModels(ctx, project, options.QuietPull)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tprepareNetworks(project)\n\n\tnetworks, err := s.ensureNetworks(ctx, project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvolumes, err := s.ensureProjectVolumes(ctx, project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar observedState Containers\n\tobservedState, err = s.getContainers(ctx, project.Name, oneOffInclude, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\torphans := observedState.filter(isOrphaned(project))\n\tif len(orphans) > 0 && !options.IgnoreOrphans {\n\t\tif options.RemoveOrphans {\n\t\t\terr := s.removeContainers(ctx, orphans, nil, nil, false)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else {\n\t\t\tlogrus.Warnf(\"Found orphan containers (%s) for this project. 
If \"+\n\t\t\t\t\"you removed or renamed this service in your compose \"+\n\t\t\t\t\"file, you can run this command with the \"+\n\t\t\t\t\"--remove-orphans flag to clean it up.\", orphans.names())\n\t\t}\n\t}\n\n\t// Temporary implementation of use_api_socket until we get actual support inside docker engine\n\tproject, err = s.useAPISocket(project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn newConvergence(options.Services, observedState, networks, volumes, s).apply(ctx, project, options)\n}\n\nfunc prepareNetworks(project *types.Project) {\n\tfor k, nw := range project.Networks {\n\t\tnw.CustomLabels = nw.CustomLabels.\n\t\t\tAdd(api.NetworkLabel, k).\n\t\t\tAdd(api.ProjectLabel, project.Name).\n\t\t\tAdd(api.VersionLabel, api.ComposeVersion)\n\t\tproject.Networks[k] = nw\n\t}\n}\n\nfunc (s *composeService) ensureNetworks(ctx context.Context, project *types.Project) (map[string]string, error) {\n\tnetworks := map[string]string{}\n\tfor name, nw := range project.Networks {\n\t\tid, err := s.ensureNetwork(ctx, project, name, &nw)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tnetworks[name] = id\n\t\tproject.Networks[name] = nw\n\t}\n\treturn networks, nil\n}\n\nfunc (s *composeService) ensureProjectVolumes(ctx context.Context, project *types.Project) (map[string]string, error) {\n\tids := map[string]string{}\n\tfor k, volume := range project.Volumes {\n\t\tvolume.CustomLabels = volume.CustomLabels.Add(api.VolumeLabel, k)\n\t\tvolume.CustomLabels = volume.CustomLabels.Add(api.ProjectLabel, project.Name)\n\t\tvolume.CustomLabels = volume.CustomLabels.Add(api.VersionLabel, api.ComposeVersion)\n\t\tid, err := s.ensureVolume(ctx, k, volume, project)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tids[k] = id\n\t}\n\n\treturn ids, nil\n}\n\n//nolint:gocyclo\nfunc (s *composeService) getCreateConfigs(ctx context.Context,\n\tp *types.Project,\n\tservice types.ServiceConfig,\n\tnumber int,\n\tinherit *container.Summary,\n\topts createOptions,\n) 
(createConfigs, error) {\n\tlabels, err := s.prepareLabels(opts.Labels, service, number)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\tvar runCmd, entrypoint []string\n\tif service.Command != nil {\n\t\trunCmd = service.Command\n\t}\n\tif service.Entrypoint != nil {\n\t\tentrypoint = service.Entrypoint\n\t}\n\n\tvar (\n\t\ttty       = service.Tty\n\t\tstdinOpen = service.StdinOpen\n\t)\n\n\tproxyConfig := types.MappingWithEquals(s.configFile().ParseProxyConfig(s.apiClient().DaemonHost(), nil))\n\tenv := proxyConfig.OverrideBy(service.Environment)\n\n\tvar mainNwName string\n\tvar mainNw *types.ServiceNetworkConfig\n\tif len(service.Networks) > 0 {\n\t\tmainNwName = service.NetworksByPriority()[0]\n\t\tmainNw = service.Networks[mainNwName]\n\t}\n\n\tif err := s.prepareContainerMACAddress(service, mainNw, mainNwName); err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\thealthcheck, err := s.ToMobyHealthCheck(ctx, service.HealthCheck)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\texposedPorts, err := buildContainerPorts(service)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\tcontainerConfig := container.Config{\n\t\tHostname:        service.Hostname,\n\t\tDomainname:      service.DomainName,\n\t\tUser:            service.User,\n\t\tExposedPorts:    exposedPorts,\n\t\tTty:             tty,\n\t\tOpenStdin:       stdinOpen,\n\t\tStdinOnce:       opts.AttachStdin && stdinOpen,\n\t\tAttachStdin:     opts.AttachStdin,\n\t\tAttachStderr:    true,\n\t\tAttachStdout:    true,\n\t\tCmd:             runCmd,\n\t\tImage:           api.GetImageNameOrDefault(service, p.Name),\n\t\tWorkingDir:      service.WorkingDir,\n\t\tEntrypoint:      entrypoint,\n\t\tNetworkDisabled: service.NetworkMode == \"disabled\",\n\t\tLabels:          labels,\n\t\tStopSignal:      service.StopSignal,\n\t\tEnv:             ToMobyEnv(env),\n\t\tHealthcheck:     healthcheck,\n\t\tStopTimeout:     ToSeconds(service.StopGracePeriod),\n\t} // 
VOLUMES/MOUNTS/FILESYSTEMS\n\ttmpfs := map[string]string{}\n\tfor _, t := range service.Tmpfs {\n\t\tk, v, _ := strings.Cut(t, \":\")\n\t\ttmpfs[k] = v\n\t}\n\tbinds, mounts, err := s.buildContainerVolumes(ctx, *p, service, inherit)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\t// NETWORKING\n\tlinks, err := s.getLinks(ctx, p.Name, service, number)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\tapiVersion, err := s.RuntimeVersion(ctx)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\tnetworkMode, networkingConfig, err := defaultNetworkSettings(p, service, number, links, opts.UseNetworkAliases, apiVersion)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\tportBindings, err := buildContainerPortBindingOptions(service)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\t// MISC\n\tresources := getDeployResources(service)\n\tvar logConfig container.LogConfig\n\tif service.Logging != nil {\n\t\tlogConfig = container.LogConfig{\n\t\t\tType:   service.Logging.Driver,\n\t\t\tConfig: service.Logging.Options,\n\t\t}\n\t}\n\tsecurityOpts, unconfined, err := parseSecurityOpts(p, service.SecurityOpt)\n\tif err != nil {\n\t\treturn createConfigs{}, err\n\t}\n\n\tvar dnsIPs []netip.Addr\n\tfor _, d := range service.DNS {\n\t\tdnsIP, err := netip.ParseAddr(d)\n\t\tif err != nil {\n\t\t\treturn createConfigs{}, fmt.Errorf(\"invalid DNS address: %w\", err)\n\t\t}\n\t\tdnsIPs = append(dnsIPs, dnsIP)\n\t}\n\n\thostConfig := container.HostConfig{\n\t\tAutoRemove:     opts.AutoRemove,\n\t\tAnnotations:    service.Annotations,\n\t\tBinds:          binds,\n\t\tMounts:         mounts,\n\t\tCapAdd:         service.CapAdd,\n\t\tCapDrop:        service.CapDrop,\n\t\tNetworkMode:    networkMode,\n\t\tInit:           service.Init,\n\t\tIpcMode:        container.IpcMode(service.Ipc),\n\t\tCgroupnsMode:   container.CgroupnsMode(service.Cgroup),\n\t\tReadonlyRootfs: service.ReadOnly,\n\t\tRestartPolicy:  
getRestartPolicy(service),\n\t\tShmSize:        int64(service.ShmSize),\n\t\tSysctls:        service.Sysctls,\n\t\tPortBindings:   portBindings,\n\t\tResources:      resources,\n\t\tVolumeDriver:   service.VolumeDriver,\n\t\tVolumesFrom:    service.VolumesFrom,\n\t\tDNS:            dnsIPs,\n\t\tDNSSearch:      service.DNSSearch,\n\t\tDNSOptions:     service.DNSOpts,\n\t\tExtraHosts:     service.ExtraHosts.AsList(\":\"),\n\t\tSecurityOpt:    securityOpts,\n\t\tStorageOpt:     service.StorageOpt,\n\t\tUsernsMode:     container.UsernsMode(service.UserNSMode),\n\t\tUTSMode:        container.UTSMode(service.Uts),\n\t\tPrivileged:     service.Privileged,\n\t\tPidMode:        container.PidMode(service.Pid),\n\t\tTmpfs:          tmpfs,\n\t\tIsolation:      container.Isolation(service.Isolation),\n\t\tRuntime:        service.Runtime,\n\t\tLogConfig:      logConfig,\n\t\tGroupAdd:       service.GroupAdd,\n\t\tLinks:          links,\n\t\tOomScoreAdj:    int(service.OomScoreAdj),\n\t}\n\n\tif unconfined {\n\t\thostConfig.MaskedPaths = []string{}\n\t\thostConfig.ReadonlyPaths = []string{}\n\t}\n\n\tcfgs := createConfigs{\n\t\tContainer: &containerConfig,\n\t\tHost:      &hostConfig,\n\t\tNetwork:   networkingConfig,\n\t\tLinks:     links,\n\t}\n\treturn cfgs, nil\n}\n\n// prepareContainerMACAddress handles the service-level mac_address field and the newer mac_address field added to service\n// network config. This newer field is only compatible with the Engine API v1.44 (and onwards), and this API version\n// also deprecates the container-wide mac_address field. 
Thus, this method will validate service config and mutate the\n// passed mainNw to provide backward-compatibility whenever possible.\n//\n// It returns the container-wide MAC address, but this value will be kept empty for newer API versions.\nfunc (s *composeService) prepareContainerMACAddress(service types.ServiceConfig, mainNw *types.ServiceNetworkConfig, nwName string) error {\n\t// Engine API 1.44 added support for endpoint-specific MAC address and now returns a warning when a MAC address is\n\t// set in container.Config. Thus, we have to jump through a number of hoops:\n\t//\n\t// 1. Top-level mac_address and main endpoint's MAC address should be the same ;\n\t// 2. If supported by the API, top-level mac_address should be migrated to the main endpoint and container.Config\n\t//    should be kept empty ;\n\t// 3. Otherwise, the endpoint mac_address should be set in container.Config and no other endpoint-specific\n\t//    mac_address can be specified. If that's the case, use top-level mac_address ;\n\t//\n\t// After that, if an endpoint mac_address is set, it's either user-defined or migrated by the code below, so\n\t// there's no need to check for API version in defaultNetworkSettings.\n\tmacAddress := service.MacAddress\n\tif macAddress != \"\" && mainNw != nil && mainNw.MacAddress != \"\" && mainNw.MacAddress != macAddress {\n\t\treturn fmt.Errorf(\"the service-level mac_address should have the same value as network %s\", nwName)\n\t}\n\tif mainNw != nil && mainNw.MacAddress == \"\" {\n\t\tmainNw.MacAddress = macAddress\n\t}\n\treturn nil\n}\n\nfunc getAliases(project *types.Project, service types.ServiceConfig, serviceIndex int, cfg *types.ServiceNetworkConfig, useNetworkAliases bool) []string {\n\taliases := []string{getContainerName(project.Name, service, serviceIndex)}\n\tif useNetworkAliases {\n\t\taliases = append(aliases, service.Name)\n\t\tif cfg != nil {\n\t\t\taliases = append(aliases, cfg.Aliases...)\n\t\t}\n\t}\n\treturn aliases\n}\n\nfunc 
createEndpointSettings(p *types.Project, service types.ServiceConfig, serviceIndex int, networkKey string, links []string, useNetworkAliases bool) (*network.EndpointSettings, error) {\n\tconst ifname = \"com.docker.network.endpoint.ifname\"\n\n\tconfig := service.Networks[networkKey]\n\tvar ipam *network.EndpointIPAMConfig\n\tvar (\n\t\tipv4Address netip.Addr\n\t\tipv6Address netip.Addr\n\t\tmacAddress  string\n\t\tdriverOpts  types.Options\n\t\tgwPriority  int\n\t)\n\tif config != nil {\n\t\tvar err error\n\t\tif config.Ipv4Address != \"\" {\n\t\t\tipv4Address, err = netip.ParseAddr(config.Ipv4Address)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"invalid IPv4 address: %w\", err)\n\t\t\t}\n\t\t}\n\t\tif config.Ipv6Address != \"\" {\n\t\t\tipv6Address, err = netip.ParseAddr(config.Ipv6Address)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"invalid IPv6 address: %w\", err)\n\t\t\t}\n\t\t}\n\t\tvar linkLocalIPs []netip.Addr\n\t\tfor _, link := range config.LinkLocalIPs {\n\t\t\tif link == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tllIP, err := netip.ParseAddr(link)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"invalid link-local IP: %w\", err)\n\t\t\t}\n\t\t\tlinkLocalIPs = append(linkLocalIPs, llIP)\n\t\t}\n\n\t\tipam = &network.EndpointIPAMConfig{\n\t\t\tIPv4Address:  ipv4Address.Unmap(),\n\t\t\tIPv6Address:  ipv6Address,\n\t\t\tLinkLocalIPs: linkLocalIPs,\n\t\t}\n\t\tmacAddress = config.MacAddress\n\t\tdriverOpts = config.DriverOpts\n\t\tif config.InterfaceName != \"\" {\n\t\t\tif driverOpts == nil {\n\t\t\t\tdriverOpts = map[string]string{}\n\t\t\t}\n\t\t\tif name, ok := driverOpts[ifname]; ok && name != config.InterfaceName {\n\t\t\t\tlogrus.Warnf(\"ignoring services.%s.networks.%s.interface_name as %s driver_opts is already declared\", service.Name, networkKey, ifname)\n\t\t\t}\n\t\t\tdriverOpts[ifname] = config.InterfaceName\n\t\t}\n\t\tgwPriority = config.GatewayPriority\n\t}\n\tvar ma network.HardwareAddr\n\tif macAddress != 
\"\" {\n\t\tvar err error\n\t\tma, err = parseMACAddr(macAddress)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\treturn &network.EndpointSettings{\n\t\tAliases:     getAliases(p, service, serviceIndex, config, useNetworkAliases),\n\t\tLinks:       links,\n\t\tIPAddress:   ipv4Address,\n\t\tIPv6Gateway: ipv6Address,\n\t\tIPAMConfig:  ipam,\n\t\tMacAddress:  ma,\n\t\tDriverOpts:  driverOpts,\n\t\tGwPriority:  gwPriority,\n\t}, nil\n}\n\n// copy/pasted from https://github.com/docker/cli/blob/9de1b162f/cli/command/container/opts.go#L673-L697 + RelativePath\n// TODO find so way to share this code with docker/cli\nfunc parseSecurityOpts(p *types.Project, securityOpts []string) ([]string, bool, error) {\n\tvar (\n\t\tunconfined bool\n\t\tparsed     []string\n\t)\n\tfor _, opt := range securityOpts {\n\t\tif opt == \"systempaths=unconfined\" {\n\t\t\tunconfined = true\n\t\t\tcontinue\n\t\t}\n\t\tcon := strings.SplitN(opt, \"=\", 2)\n\t\tif len(con) == 1 && con[0] != \"no-new-privileges\" {\n\t\t\tif strings.Contains(opt, \":\") {\n\t\t\t\tcon = strings.SplitN(opt, \":\", 2)\n\t\t\t} else {\n\t\t\t\treturn securityOpts, false, fmt.Errorf(\"invalid security-opt: %q\", opt)\n\t\t\t}\n\t\t}\n\t\tif con[0] == \"seccomp\" && con[1] != \"unconfined\" && con[1] != \"builtin\" {\n\t\t\tf, err := os.ReadFile(p.RelativePath(con[1]))\n\t\t\tif err != nil {\n\t\t\t\treturn securityOpts, false, fmt.Errorf(\"opening seccomp profile (%s) failed: %w\", con[1], err)\n\t\t\t}\n\t\t\tb := bytes.NewBuffer(nil)\n\t\t\tif err := json.Compact(b, f); err != nil {\n\t\t\t\treturn securityOpts, false, fmt.Errorf(\"compacting json for seccomp profile (%s) failed: %w\", con[1], err)\n\t\t\t}\n\t\t\tparsed = append(parsed, fmt.Sprintf(\"seccomp=%s\", b.Bytes()))\n\t\t} else {\n\t\t\tparsed = append(parsed, opt)\n\t\t}\n\t}\n\n\treturn parsed, unconfined, nil\n}\n\nfunc (s *composeService) prepareLabels(labels types.Labels, service types.ServiceConfig, number int) (map[string]string, 
error) {\n\thash, err := ServiceHash(service)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tlabels[api.ConfigHashLabel] = hash\n\n\tif number > 0 {\n\t\t// One-off containers are not indexed\n\t\tlabels[api.ContainerNumberLabel] = strconv.Itoa(number)\n\t}\n\n\tvar dependencies []string\n\tfor s, d := range service.DependsOn {\n\t\tdependencies = append(dependencies, fmt.Sprintf(\"%s:%s:%t\", s, d.Condition, d.Restart))\n\t}\n\tlabels[api.DependenciesLabel] = strings.Join(dependencies, \",\")\n\treturn labels, nil\n}\n\n// defaultNetworkSettings determines the container.NetworkMode and corresponding network.NetworkingConfig (nil if not applicable).\nfunc defaultNetworkSettings(project *types.Project,\n\tservice types.ServiceConfig, serviceIndex int,\n\tlinks []string, useNetworkAliases bool,\n\tversion string,\n) (container.NetworkMode, *network.NetworkingConfig, error) {\n\tif service.NetworkMode != \"\" {\n\t\treturn container.NetworkMode(service.NetworkMode), nil, nil\n\t}\n\n\tif len(project.Networks) == 0 {\n\t\treturn network.NetworkNone, nil, nil\n\t}\n\n\tif versions.LessThan(version, apiVersion149) {\n\t\tfor _, config := range service.Networks {\n\t\t\tif config != nil && config.InterfaceName != \"\" {\n\t\t\t\treturn \"\", nil, fmt.Errorf(\"interface_name requires Docker Engine %s or later\", DockerEngineV28_1)\n\t\t\t}\n\t\t}\n\t}\n\n\tserviceNetworks := service.NetworksByPriority()\n\tprimaryNetworkKey := \"default\"\n\tif len(serviceNetworks) > 0 {\n\t\tprimaryNetworkKey = serviceNetworks[0]\n\t\tserviceNetworks = serviceNetworks[1:]\n\t}\n\n\tprimaryNetworkEndpoint, err := createEndpointSettings(project, service, serviceIndex, primaryNetworkKey, links, useNetworkAliases)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\tif primaryNetworkEndpoint.MacAddress.String() == \"\" {\n\t\tprimaryNetworkEndpoint.MacAddress, err = parseMACAddr(service.MacAddress)\n\t\tif err != nil {\n\t\t\treturn \"\", nil, 
err\n\t\t}\n\t}\n\n\tprimaryNetworkMobyNetworkName := project.Networks[primaryNetworkKey].Name\n\tendpointsConfig := map[string]*network.EndpointSettings{\n\t\tprimaryNetworkMobyNetworkName: primaryNetworkEndpoint,\n\t}\n\n\t// Starting from API version 1.44, the Engine will take several EndpointsConfigs\n\t// so we can pass all the extra networks we want the container to be connected to\n\t// in the network configuration instead of connecting the container to each extra\n\t// network individually after creation.\n\tfor _, networkKey := range serviceNetworks {\n\t\tepSettings, err := createEndpointSettings(project, service, serviceIndex, networkKey, links, useNetworkAliases)\n\t\tif err != nil {\n\t\t\treturn \"\", nil, err\n\t\t}\n\t\tmobyNetworkName := project.Networks[networkKey].Name\n\t\tendpointsConfig[mobyNetworkName] = epSettings\n\t}\n\n\tnetworkConfig := &network.NetworkingConfig{\n\t\tEndpointsConfig: endpointsConfig,\n\t}\n\n\t// From the Engine API docs:\n\t// > Supported standard values are: bridge, host, none, and container:<name|id>.\n\t// > Any other value is taken as a custom network's name to which this container should connect to.\n\treturn container.NetworkMode(primaryNetworkMobyNetworkName), networkConfig, nil\n}\n\nfunc getRestartPolicy(service types.ServiceConfig) container.RestartPolicy {\n\tvar restart container.RestartPolicy\n\tif service.Restart != \"\" {\n\t\tname, num, ok := strings.Cut(service.Restart, \":\")\n\t\tvar attempts int\n\t\tif ok {\n\t\t\tattempts, _ = strconv.Atoi(num)\n\t\t}\n\t\trestart = container.RestartPolicy{\n\t\t\tName:              mapRestartPolicyCondition(name),\n\t\t\tMaximumRetryCount: attempts,\n\t\t}\n\t}\n\tif service.Deploy != nil && service.Deploy.RestartPolicy != nil {\n\t\tpolicy := *service.Deploy.RestartPolicy\n\t\tvar attempts int\n\t\tif policy.MaxAttempts != nil {\n\t\t\tattempts = int(*policy.MaxAttempts)\n\t\t}\n\t\trestart = container.RestartPolicy{\n\t\t\tName:              
mapRestartPolicyCondition(policy.Condition),\n\t\t\tMaximumRetryCount: attempts,\n\t\t}\n\t}\n\treturn restart\n}\n\nfunc mapRestartPolicyCondition(condition string) container.RestartPolicyMode {\n\t// map definitions of deploy.restart_policy to engine definitions\n\tswitch condition {\n\tcase \"none\", \"no\":\n\t\treturn container.RestartPolicyDisabled\n\tcase \"on-failure\":\n\t\treturn container.RestartPolicyOnFailure\n\tcase \"unless-stopped\":\n\t\treturn container.RestartPolicyUnlessStopped\n\tcase \"any\", \"always\":\n\t\treturn container.RestartPolicyAlways\n\tdefault:\n\t\treturn container.RestartPolicyMode(condition)\n\t}\n}\n\nfunc getDeployResources(s types.ServiceConfig) container.Resources {\n\tvar swappiness *int64\n\tif s.MemSwappiness != 0 {\n\t\tval := int64(s.MemSwappiness)\n\t\tswappiness = &val\n\t}\n\tresources := container.Resources{\n\t\tCgroupParent:       s.CgroupParent,\n\t\tMemory:             int64(s.MemLimit),\n\t\tMemorySwap:         int64(s.MemSwapLimit),\n\t\tMemorySwappiness:   swappiness,\n\t\tMemoryReservation:  int64(s.MemReservation),\n\t\tOomKillDisable:     &s.OomKillDisable,\n\t\tCPUCount:           s.CPUCount,\n\t\tCPUPeriod:          s.CPUPeriod,\n\t\tCPUQuota:           s.CPUQuota,\n\t\tCPURealtimePeriod:  s.CPURTPeriod,\n\t\tCPURealtimeRuntime: s.CPURTRuntime,\n\t\tCPUShares:          s.CPUShares,\n\t\tNanoCPUs:           int64(s.CPUS * 1e9),\n\t\tCPUPercent:         int64(s.CPUPercent * 100),\n\t\tCpusetCpus:         s.CPUSet,\n\t\tDeviceCgroupRules:  s.DeviceCgroupRules,\n\t}\n\n\tif s.PidsLimit != 0 {\n\t\tresources.PidsLimit = &s.PidsLimit\n\t}\n\n\tsetBlkio(s.BlkioConfig, &resources)\n\n\tif s.Deploy != nil {\n\t\tsetLimits(s.Deploy.Resources.Limits, &resources)\n\t\tsetReservations(s.Deploy.Resources.Reservations, &resources)\n\t}\n\n\tvar cdiDeviceNames []string\n\tfor _, device := range s.Devices {\n\n\t\tif device.Source == device.Target && cdi.IsQualifiedName(device.Source) {\n\t\t\tcdiDeviceNames = 
append(cdiDeviceNames, device.Source)\n\t\t\tcontinue\n\t\t}\n\n\t\tresources.Devices = append(resources.Devices, container.DeviceMapping{\n\t\t\tPathOnHost:        device.Source,\n\t\t\tPathInContainer:   device.Target,\n\t\t\tCgroupPermissions: device.Permissions,\n\t\t})\n\t}\n\n\tif len(cdiDeviceNames) > 0 {\n\t\tresources.DeviceRequests = append(resources.DeviceRequests, container.DeviceRequest{\n\t\t\tDriver:    \"cdi\",\n\t\t\tDeviceIDs: cdiDeviceNames,\n\t\t})\n\t}\n\n\tfor _, gpus := range s.Gpus {\n\t\tresources.DeviceRequests = append(resources.DeviceRequests, container.DeviceRequest{\n\t\t\tDriver:       gpus.Driver,\n\t\t\tCount:        int(gpus.Count),\n\t\t\tDeviceIDs:    gpus.IDs,\n\t\t\tCapabilities: [][]string{append(gpus.Capabilities, \"gpu\")},\n\t\t\tOptions:      gpus.Options,\n\t\t})\n\t}\n\n\tulimits := toUlimits(s.Ulimits)\n\tresources.Ulimits = ulimits\n\treturn resources\n}\n\nfunc toUlimits(m map[string]*types.UlimitsConfig) []*container.Ulimit {\n\tvar ulimits []*container.Ulimit\n\tfor name, u := range m {\n\t\tsoft := u.Single\n\t\tif u.Soft != 0 {\n\t\t\tsoft = u.Soft\n\t\t}\n\t\thard := u.Single\n\t\tif u.Hard != 0 {\n\t\t\thard = u.Hard\n\t\t}\n\t\tulimits = append(ulimits, &container.Ulimit{\n\t\t\tName: name,\n\t\t\tHard: int64(hard),\n\t\t\tSoft: int64(soft),\n\t\t})\n\t}\n\treturn ulimits\n}\n\nfunc setReservations(reservations *types.Resource, resources *container.Resources) {\n\tif reservations == nil {\n\t\treturn\n\t}\n\t// Cpu reservation is a swarm option and PIDs is only a limit\n\t// So we only need to map memory reservation and devices\n\tif reservations.MemoryBytes != 0 {\n\t\tresources.MemoryReservation = int64(reservations.MemoryBytes)\n\t}\n\n\tfor _, device := range reservations.Devices {\n\t\tresources.DeviceRequests = append(resources.DeviceRequests, container.DeviceRequest{\n\t\t\tCapabilities: [][]string{device.Capabilities},\n\t\t\tCount:        int(device.Count),\n\t\t\tDeviceIDs:    
device.IDs,\n\t\t\tDriver:       device.Driver,\n\t\t\tOptions:      device.Options,\n\t\t})\n\t}\n}\n\nfunc setLimits(limits *types.Resource, resources *container.Resources) {\n\tif limits == nil {\n\t\treturn\n\t}\n\tif limits.MemoryBytes != 0 {\n\t\tresources.Memory = int64(limits.MemoryBytes)\n\t}\n\tif limits.NanoCPUs != 0 {\n\t\tresources.NanoCPUs = int64(limits.NanoCPUs * 1e9)\n\t}\n\tif limits.Pids > 0 {\n\t\tresources.PidsLimit = &limits.Pids\n\t}\n}\n\nfunc setBlkio(blkio *types.BlkioConfig, resources *container.Resources) {\n\tif blkio == nil {\n\t\treturn\n\t}\n\tresources.BlkioWeight = blkio.Weight\n\tfor _, b := range blkio.WeightDevice {\n\t\tresources.BlkioWeightDevice = append(resources.BlkioWeightDevice, &blkiodev.WeightDevice{\n\t\t\tPath:   b.Path,\n\t\t\tWeight: b.Weight,\n\t\t})\n\t}\n\tfor _, b := range blkio.DeviceReadBps {\n\t\tresources.BlkioDeviceReadBps = append(resources.BlkioDeviceReadBps, &blkiodev.ThrottleDevice{\n\t\t\tPath: b.Path,\n\t\t\tRate: uint64(b.Rate),\n\t\t})\n\t}\n\tfor _, b := range blkio.DeviceReadIOps {\n\t\tresources.BlkioDeviceReadIOps = append(resources.BlkioDeviceReadIOps, &blkiodev.ThrottleDevice{\n\t\t\tPath: b.Path,\n\t\t\tRate: uint64(b.Rate),\n\t\t})\n\t}\n\tfor _, b := range blkio.DeviceWriteBps {\n\t\tresources.BlkioDeviceWriteBps = append(resources.BlkioDeviceWriteBps, &blkiodev.ThrottleDevice{\n\t\t\tPath: b.Path,\n\t\t\tRate: uint64(b.Rate),\n\t\t})\n\t}\n\tfor _, b := range blkio.DeviceWriteIOps {\n\t\tresources.BlkioDeviceWriteIOps = append(resources.BlkioDeviceWriteIOps, &blkiodev.ThrottleDevice{\n\t\t\tPath: b.Path,\n\t\t\tRate: uint64(b.Rate),\n\t\t})\n\t}\n}\n\nfunc buildContainerPorts(s types.ServiceConfig) (network.PortSet, error) {\n\t// Add published ports as exposed ports.\n\texposedPorts := network.PortSet{}\n\tfor _, p := range s.Ports {\n\t\tnp, err := network.ParsePort(fmt.Sprintf(\"%d/%s\", p.Target, p.Protocol))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\texposedPorts[np] = 
struct{}{}\n\t}\n\n\t// Merge in exposed ports to the map of published ports\n\tfor _, e := range s.Expose {\n\t\t// support two formats for expose, original format <portnum>/[<proto>]\n\t\t// or <startport-endport>/[<proto>]\n\t\tpr, err := network.ParsePortRange(e)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// parse the start and end port and create a sequence of ports to expose\n\t\t// when exposing a single port, the start and end port are the same\n\t\tfor p := range pr.All() {\n\t\t\texposedPorts[p] = struct{}{}\n\t\t}\n\t}\n\treturn exposedPorts, nil\n}\n\nfunc buildContainerPortBindingOptions(s types.ServiceConfig) (network.PortMap, error) {\n\tbindings := network.PortMap{}\n\tfor _, port := range s.Ports {\n\t\tvar err error\n\t\tp, err := network.ParsePort(fmt.Sprintf(\"%d/%s\", port.Target, port.Protocol))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tvar hostIP netip.Addr\n\t\tif port.HostIP != \"\" {\n\t\t\thostIP, err = netip.ParseAddr(port.HostIP)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\tbindings[p] = append(bindings[p], network.PortBinding{\n\t\t\tHostIP:   hostIP,\n\t\t\tHostPort: port.Published,\n\t\t})\n\t}\n\treturn bindings, nil\n}\n\nfunc getDependentServiceFromMode(mode string) string {\n\tif strings.HasPrefix(\n\t\tmode,\n\t\ttypes.NetworkModeServicePrefix,\n\t) {\n\t\treturn mode[len(types.NetworkModeServicePrefix):]\n\t}\n\treturn \"\"\n}\n\nfunc (s *composeService) buildContainerVolumes(\n\tctx context.Context,\n\tp types.Project,\n\tservice types.ServiceConfig,\n\tinherit *container.Summary,\n) ([]string, []mount.Mount, error) {\n\tvar mounts []mount.Mount\n\tvar binds []string\n\n\tmountOptions, err := s.buildContainerMountOptions(ctx, p, service, inherit)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tfor _, m := range mountOptions {\n\t\tswitch m.Type {\n\t\tcase mount.TypeBind:\n\t\t\t// `Mount` is preferred but does not offer an option to create the host path if missing\n\t\t\t// so `Bind` 
API is used here with raw volume string\n\t\t\t// see https://github.com/moby/moby/issues/43483\n\t\t\tv := findVolumeByTarget(service.Volumes, m.Target)\n\t\t\tif v != nil {\n\t\t\t\tif v.Type != types.VolumeTypeBind {\n\t\t\t\t\tv.Source = m.Source\n\t\t\t\t}\n\t\t\t\tif !bindRequiresMountAPI(v.Bind) {\n\t\t\t\t\tsource := m.Source\n\t\t\t\t\tif vol := findVolumeByName(p.Volumes, m.Source); vol != nil {\n\t\t\t\t\t\tsource = vol.Name\n\t\t\t\t\t}\n\t\t\t\t\tbinds = append(binds, toBindString(source, v))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t}\n\t\tcase mount.TypeVolume:\n\t\t\tv := findVolumeByTarget(service.Volumes, m.Target)\n\t\t\tvol := findVolumeByName(p.Volumes, m.Source)\n\t\t\tif v != nil && vol != nil {\n\t\t\t\t// Prefer the bind API if no advanced option is used, to preserve backward compatibility\n\t\t\t\tif !volumeRequiresMountAPI(v.Volume) {\n\t\t\t\t\tbinds = append(binds, toBindString(vol.Name, v))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t}\n\t\tcase mount.TypeImage:\n\t\t\tversion, err := s.RuntimeVersion(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t\tif versions.LessThan(version, apiVersion148) {\n\t\t\t\treturn nil, nil, fmt.Errorf(\"volume with type=image requires Docker Engine %s or later\", dockerEngineV28)\n\t\t\t}\n\t\t}\n\t\tmounts = append(mounts, m)\n\t}\n\treturn binds, mounts, nil\n}\n\nfunc toBindString(name string, v *types.ServiceVolumeConfig) string {\n\taccess := \"rw\"\n\tif v.ReadOnly {\n\t\taccess = \"ro\"\n\t}\n\toptions := []string{access}\n\tif v.Bind != nil && v.Bind.SELinux != \"\" {\n\t\toptions = append(options, v.Bind.SELinux)\n\t}\n\tif v.Bind != nil && v.Bind.Propagation != \"\" {\n\t\toptions = append(options, v.Bind.Propagation)\n\t}\n\tif v.Volume != nil && v.Volume.NoCopy {\n\t\toptions = append(options, \"nocopy\")\n\t}\n\treturn fmt.Sprintf(\"%s:%s:%s\", name, v.Target, strings.Join(options, \",\"))\n}\n\nfunc findVolumeByName(volumes types.Volumes, name string) *types.VolumeConfig 
{\n\tfor _, vol := range volumes {\n\t\tif vol.Name == name {\n\t\t\treturn &vol\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc findVolumeByTarget(volumes []types.ServiceVolumeConfig, target string) *types.ServiceVolumeConfig {\n\tfor _, v := range volumes {\n\t\tif v.Target == target {\n\t\t\treturn &v\n\t\t}\n\t}\n\treturn nil\n}\n\n// bindRequiresMountAPI checks whether the Bind declaration can be implemented by the plain old Bind API or uses any of the advanced\n// options which require use of the Mount API\nfunc bindRequiresMountAPI(bind *types.ServiceVolumeBind) bool {\n\tswitch {\n\tcase bind == nil:\n\t\treturn false\n\tcase !bool(bind.CreateHostPath):\n\t\treturn true\n\tcase bind.Propagation != \"\":\n\t\treturn true\n\tcase bind.Recursive != \"\":\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// volumeRequiresMountAPI checks whether the Volume declaration can be implemented by the plain old Bind API or uses any of the advanced\n// options which require use of the Mount API\nfunc volumeRequiresMountAPI(vol *types.ServiceVolumeVolume) bool {\n\tswitch {\n\tcase vol == nil:\n\t\treturn false\n\tcase len(vol.Labels) > 0:\n\t\treturn true\n\tcase vol.Subpath != \"\":\n\t\treturn true\n\tcase vol.NoCopy:\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nfunc (s *composeService) buildContainerMountOptions(ctx context.Context, p types.Project, service types.ServiceConfig, inherit *container.Summary) ([]mount.Mount, error) {\n\tmounts := map[string]mount.Mount{}\n\tif inherit != nil {\n\t\tfor _, m := range inherit.Mounts {\n\t\t\tif m.Type == \"tmpfs\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tsrc := m.Source\n\t\t\tif m.Type == \"volume\" {\n\t\t\t\tsrc = m.Name\n\t\t\t}\n\n\t\t\timg, err := s.apiClient().ImageInspect(ctx, api.GetImageNameOrDefault(service, p.Name))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\n\t\t\tif img.Config != nil {\n\t\t\t\tif _, ok := img.Config.Volumes[m.Destination]; ok {\n\t\t\t\t\t// inherit previous container's anonymous 
volume\n\t\t\t\t\tmounts[m.Destination] = mount.Mount{\n\t\t\t\t\t\tType:     m.Type,\n\t\t\t\t\t\tSource:   src,\n\t\t\t\t\t\tTarget:   m.Destination,\n\t\t\t\t\t\tReadOnly: !m.RW,\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tvolumes := []types.ServiceVolumeConfig{}\n\t\t\tfor _, v := range service.Volumes {\n\t\t\t\tif v.Target != m.Destination || v.Source != \"\" {\n\t\t\t\t\tvolumes = append(volumes, v)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\t// inherit previous container's anonymous volume\n\t\t\t\tmounts[m.Destination] = mount.Mount{\n\t\t\t\t\tType:     m.Type,\n\t\t\t\t\tSource:   src,\n\t\t\t\t\tTarget:   m.Destination,\n\t\t\t\t\tReadOnly: !m.RW,\n\t\t\t\t}\n\t\t\t}\n\t\t\tservice.Volumes = volumes\n\t\t}\n\t}\n\n\tmounts, err := fillBindMounts(p, service, mounts)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvalues := make([]mount.Mount, 0, len(mounts))\n\tfor _, v := range mounts {\n\t\tvalues = append(values, v)\n\t}\n\treturn values, nil\n}\n\nfunc fillBindMounts(p types.Project, s types.ServiceConfig, m map[string]mount.Mount) (map[string]mount.Mount, error) {\n\tfor _, v := range s.Volumes {\n\t\tbindMount, err := buildMount(p, v)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tm[bindMount.Target] = bindMount\n\t}\n\n\tsecrets, err := buildContainerSecretMounts(p, s)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor _, s := range secrets {\n\t\tif _, found := m[s.Target]; found {\n\t\t\tcontinue\n\t\t}\n\t\tm[s.Target] = s\n\t}\n\n\tconfigs, err := buildContainerConfigMounts(p, s)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor _, c := range configs {\n\t\tif _, found := m[c.Target]; found {\n\t\t\tcontinue\n\t\t}\n\t\tm[c.Target] = c\n\t}\n\treturn m, nil\n}\n\nfunc buildContainerConfigMounts(p types.Project, s types.ServiceConfig) ([]mount.Mount, error) {\n\tmounts := map[string]mount.Mount{}\n\n\tconfigsBaseDir := \"/\"\n\tfor _, config := range s.Configs {\n\t\ttarget := config.Target\n\t\tif config.Target == \"\" {\n\t\t\ttarget = 
configsBaseDir + config.Source\n\t\t} else if !isAbsTarget(config.Target) {\n\t\t\ttarget = configsBaseDir + config.Target\n\t\t}\n\n\t\tdefinedConfig := p.Configs[config.Source]\n\t\tif definedConfig.External {\n\t\t\treturn nil, fmt.Errorf(\"unsupported external config %s\", definedConfig.Name)\n\t\t}\n\n\t\tif definedConfig.Driver != \"\" {\n\t\t\treturn nil, errors.New(\"Docker Compose does not support configs.*.driver\") //nolint:staticcheck\n\t\t}\n\t\tif definedConfig.TemplateDriver != \"\" {\n\t\t\treturn nil, errors.New(\"Docker Compose does not support configs.*.template_driver\") //nolint:staticcheck\n\t\t}\n\n\t\tif definedConfig.Environment != \"\" || definedConfig.Content != \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif config.UID != \"\" || config.GID != \"\" || config.Mode != nil {\n\t\t\tlogrus.Warn(\"config `uid`, `gid` and `mode` are not supported, they will be ignored\")\n\t\t}\n\n\t\tbindMount, err := buildMount(p, types.ServiceVolumeConfig{\n\t\t\tType:     types.VolumeTypeBind,\n\t\t\tSource:   definedConfig.File,\n\t\t\tTarget:   target,\n\t\t\tReadOnly: true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tmounts[target] = bindMount\n\t}\n\tvalues := make([]mount.Mount, 0, len(mounts))\n\tfor _, v := range mounts {\n\t\tvalues = append(values, v)\n\t}\n\treturn values, nil\n}\n\nfunc buildContainerSecretMounts(p types.Project, s types.ServiceConfig) ([]mount.Mount, error) {\n\tmounts := map[string]mount.Mount{}\n\n\tsecretsDir := \"/run/secrets/\"\n\tfor _, secret := range s.Secrets {\n\t\ttarget := secret.Target\n\t\tif secret.Target == \"\" {\n\t\t\ttarget = secretsDir + secret.Source\n\t\t} else if !isAbsTarget(secret.Target) {\n\t\t\ttarget = secretsDir + secret.Target\n\t\t}\n\n\t\tdefinedSecret := p.Secrets[secret.Source]\n\t\tif definedSecret.External {\n\t\t\treturn nil, fmt.Errorf(\"unsupported external secret %s\", definedSecret.Name)\n\t\t}\n\n\t\tif definedSecret.Driver != \"\" {\n\t\t\treturn nil, errors.New(\"Docker 
Compose does not support secrets.*.driver\") //nolint:staticcheck\n\t\t}\n\t\tif definedSecret.TemplateDriver != \"\" {\n\t\t\treturn nil, errors.New(\"Docker Compose does not support secrets.*.template_driver\") //nolint:staticcheck\n\t\t}\n\n\t\tif definedSecret.Environment != \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif secret.UID != \"\" || secret.GID != \"\" || secret.Mode != nil {\n\t\t\tlogrus.Warn(\"secrets `uid`, `gid` and `mode` are not supported, they will be ignored\")\n\t\t}\n\n\t\tif _, err := os.Stat(definedSecret.File); os.IsNotExist(err) {\n\t\t\tlogrus.Warnf(\"secret file %s does not exist\", definedSecret.Name)\n\t\t}\n\n\t\tmnt, err := buildMount(p, types.ServiceVolumeConfig{\n\t\t\tType:     types.VolumeTypeBind,\n\t\t\tSource:   definedSecret.File,\n\t\t\tTarget:   target,\n\t\t\tReadOnly: true,\n\t\t\tBind: &types.ServiceVolumeBind{\n\t\t\t\tCreateHostPath: false,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tmounts[target] = mnt\n\t}\n\tvalues := make([]mount.Mount, 0, len(mounts))\n\tfor _, v := range mounts {\n\t\tvalues = append(values, v)\n\t}\n\treturn values, nil\n}\n\nfunc isAbsTarget(p string) bool {\n\treturn isUnixAbs(p) || isWindowsAbs(p)\n}\n\nfunc isUnixAbs(p string) bool {\n\treturn strings.HasPrefix(p, \"/\")\n}\n\nfunc isWindowsAbs(p string) bool {\n\treturn paths.IsWindowsAbs(p)\n}\n\nfunc buildMount(project types.Project, volume types.ServiceVolumeConfig) (mount.Mount, error) {\n\tsource := volume.Source\n\tswitch volume.Type {\n\tcase types.VolumeTypeBind:\n\t\tif !filepath.IsAbs(source) && !isUnixAbs(source) && !isWindowsAbs(source) {\n\t\t\t// volume source has already been prefixed with workdir if required, by compose-go project loader\n\t\t\tvar err error\n\t\t\tsource, err = filepath.Abs(source)\n\t\t\tif err != nil {\n\t\t\t\treturn mount.Mount{}, err\n\t\t\t}\n\t\t}\n\tcase types.VolumeTypeVolume:\n\t\tif volume.Source != \"\" {\n\t\t\tpVolume, ok := project.Volumes[volume.Source]\n\t\t\tif ok 
{\n\t\t\t\tsource = pVolume.Name\n\t\t\t}\n\t\t}\n\t}\n\n\tbind, vol, tmpfs, img := buildMountOptions(volume)\n\n\tif bind != nil {\n\t\tvolume.Type = types.VolumeTypeBind\n\t}\n\n\treturn mount.Mount{\n\t\tType:          mount.Type(volume.Type),\n\t\tSource:        source,\n\t\tTarget:        volume.Target,\n\t\tReadOnly:      volume.ReadOnly,\n\t\tConsistency:   mount.Consistency(volume.Consistency),\n\t\tBindOptions:   bind,\n\t\tVolumeOptions: vol,\n\t\tTmpfsOptions:  tmpfs,\n\t\tImageOptions:  img,\n\t}, nil\n}\n\nfunc buildMountOptions(volume types.ServiceVolumeConfig) (*mount.BindOptions, *mount.VolumeOptions, *mount.TmpfsOptions, *mount.ImageOptions) {\n\tif volume.Type != types.VolumeTypeBind && volume.Bind != nil {\n\t\tlogrus.Warnf(\"mount of type `%s` should not define `bind` option\", volume.Type)\n\t}\n\tif volume.Type != types.VolumeTypeVolume && volume.Volume != nil {\n\t\tlogrus.Warnf(\"mount of type `%s` should not define `volume` option\", volume.Type)\n\t}\n\tif volume.Type != types.VolumeTypeTmpfs && volume.Tmpfs != nil {\n\t\tlogrus.Warnf(\"mount of type `%s` should not define `tmpfs` option\", volume.Type)\n\t}\n\tif volume.Type != types.VolumeTypeImage && volume.Image != nil {\n\t\tlogrus.Warnf(\"mount of type `%s` should not define `image` option\", volume.Type)\n\t}\n\n\tswitch volume.Type {\n\tcase \"bind\":\n\t\treturn buildBindOption(volume.Bind), nil, nil, nil\n\tcase \"volume\":\n\t\treturn nil, buildVolumeOptions(volume.Volume), nil, nil\n\tcase \"tmpfs\":\n\t\treturn nil, nil, buildTmpfsOptions(volume.Tmpfs), nil\n\tcase \"image\":\n\t\treturn nil, nil, nil, buildImageOptions(volume.Image)\n\t}\n\treturn nil, nil, nil, nil\n}\n\nfunc buildBindOption(bind *types.ServiceVolumeBind) *mount.BindOptions {\n\tif bind == nil {\n\t\treturn nil\n\t}\n\topts := &mount.BindOptions{\n\t\tPropagation:      mount.Propagation(bind.Propagation),\n\t\tCreateMountpoint: bool(bind.CreateHostPath),\n\t}\n\tswitch bind.Recursive {\n\tcase 
\"disabled\":\n\t\topts.NonRecursive = true\n\tcase \"writable\":\n\t\topts.ReadOnlyNonRecursive = true\n\tcase \"readonly\":\n\t\topts.ReadOnlyForceRecursive = true\n\t}\n\treturn opts\n}\n\nfunc buildVolumeOptions(vol *types.ServiceVolumeVolume) *mount.VolumeOptions {\n\tif vol == nil {\n\t\treturn nil\n\t}\n\treturn &mount.VolumeOptions{\n\t\tNoCopy:  vol.NoCopy,\n\t\tSubpath: vol.Subpath,\n\t\tLabels:  vol.Labels,\n\t\t// DriverConfig: , // FIXME missing from model ?\n\t}\n}\n\nfunc buildTmpfsOptions(tmpfs *types.ServiceVolumeTmpfs) *mount.TmpfsOptions {\n\tif tmpfs == nil {\n\t\treturn nil\n\t}\n\treturn &mount.TmpfsOptions{\n\t\tSizeBytes: int64(tmpfs.Size),\n\t\tMode:      os.FileMode(tmpfs.Mode),\n\t}\n}\n\nfunc buildImageOptions(image *types.ServiceVolumeImage) *mount.ImageOptions {\n\tif image == nil {\n\t\treturn nil\n\t}\n\treturn &mount.ImageOptions{\n\t\tSubpath: image.SubPath,\n\t}\n}\n\nfunc (s *composeService) ensureNetwork(ctx context.Context, project *types.Project, name string, n *types.NetworkConfig) (string, error) {\n\tif n.External {\n\t\treturn s.resolveExternalNetwork(ctx, n)\n\t}\n\n\tid, err := s.resolveOrCreateNetwork(ctx, project, name, n)\n\tif errdefs.IsConflict(err) {\n\t\t// Maybe another execution of `docker compose up|run` created same network\n\t\t// let's retry once\n\t\treturn s.resolveOrCreateNetwork(ctx, project, name, n)\n\t}\n\treturn id, err\n}\n\nfunc (s *composeService) resolveOrCreateNetwork(ctx context.Context, project *types.Project, name string, n *types.NetworkConfig) (string, error) { //nolint:gocyclo\n\t// This is containers that could be left after a diverged network was removed\n\tvar dangledContainers Containers\n\n\t// First, try to find a unique network matching by name or ID\n\tres, err := s.apiClient().NetworkInspect(ctx, n.Name, client.NetworkInspectOptions{})\n\tif err == nil {\n\t\tinspect := res.Network\n\t\t// NetworkInspect will match on ID prefix, so double check we get the expected one\n\t\t// as 
looking for network named `db` we could erroneously match network ID `db9086999caf`\n\t\tif inspect.Name == n.Name || inspect.ID == n.Name {\n\t\t\tp, ok := inspect.Labels[api.ProjectLabel]\n\t\t\tif !ok {\n\t\t\t\tlogrus.Warnf(\"a network with name %s exists but was not created by compose.\\n\"+\n\t\t\t\t\t\"Set `external: true` to use an existing network\", n.Name)\n\t\t\t} else if p != project.Name {\n\t\t\t\tlogrus.Warnf(\"a network with name %s exists but was not created for project %q.\\n\"+\n\t\t\t\t\t\"Set `external: true` to use an existing network\", n.Name, project.Name)\n\t\t\t}\n\t\t\tif inspect.Labels[api.NetworkLabel] != name {\n\t\t\t\treturn \"\", fmt.Errorf(\n\t\t\t\t\t\"network %s was found but has incorrect label %s set to %q (expected: %q)\",\n\t\t\t\t\tn.Name,\n\t\t\t\t\tapi.NetworkLabel,\n\t\t\t\t\tinspect.Labels[api.NetworkLabel],\n\t\t\t\t\tname,\n\t\t\t\t)\n\t\t\t}\n\n\t\t\thash := inspect.Labels[api.ConfigHashLabel]\n\t\t\texpected, err := NetworkHash(n)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t\tif hash == \"\" || hash == expected {\n\t\t\t\treturn inspect.ID, nil\n\t\t\t}\n\n\t\t\tdangledContainers, err = s.removeDivergedNetwork(ctx, project, name, n)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t}\n\t}\n\t// ignore other errors. 
Typically, an ambiguous request by name results in some generic `invalidParameter` error\n\n\t// Either not found, or name is ambiguous - use NetworkList to list by name\n\tnwList, err := s.apiClient().NetworkList(ctx, client.NetworkListOptions{\n\t\tFilters: make(client.Filters).Add(\"name\", n.Name),\n\t})\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\t// NetworkList matches all or part of a network name, so we have to filter for a strict match\n\tnetworks := slices.DeleteFunc(nwList.Items, func(net network.Summary) bool {\n\t\treturn net.Name != n.Name\n\t})\n\n\tfor _, nw := range networks {\n\t\tif nw.Labels[api.ProjectLabel] == project.Name &&\n\t\t\tnw.Labels[api.NetworkLabel] == name {\n\t\t\treturn nw.ID, nil\n\t\t}\n\t}\n\n\t// we could have set NetworkList up with a projectFilter and networkFilter, but not doing so lets us catch the\n\t// scenario where a network with the same name exists but doesn't have labels, and use of `CheckDuplicate: true`\n\t// prevents creating another one.\n\tif len(networks) > 0 {\n\t\tlogrus.Warnf(\"a network with name %s exists but was not created by compose.\\n\"+\n\t\t\t\"Set `external: true` to use an existing network\", n.Name)\n\t\treturn networks[0].ID, nil\n\t}\n\n\thash, err := NetworkHash(n)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tn.CustomLabels = n.CustomLabels.Add(api.ConfigHashLabel, hash)\n\tcreateOpts := client.NetworkCreateOptions{\n\t\tLabels:     mergeLabels(n.Labels, n.CustomLabels),\n\t\tDriver:     n.Driver,\n\t\tOptions:    n.DriverOpts,\n\t\tInternal:   n.Internal,\n\t\tAttachable: n.Attachable,\n\t\tEnableIPv6: 
n.EnableIPv6,\n\t\tEnableIPv4: n.EnableIPv4,\n\t}\n\n\tif n.Ipam.Driver != \"\" || len(n.Ipam.Config) > 0 {\n\t\tcreateOpts.IPAM = &network.IPAM{}\n\t}\n\n\tif n.Ipam.Driver != \"\" {\n\t\tcreateOpts.IPAM.Driver = n.Ipam.Driver\n\t}\n\n\tfor _, ipamConfig := range n.Ipam.Config {\n\t\tc, err := parseIPAMPool(ipamConfig)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tcreateOpts.IPAM.Config = append(createOpts.IPAM.Config, c)\n\t}\n\n\tnetworkEventName := fmt.Sprintf(\"Network %s\", n.Name)\n\ts.events.On(creatingEvent(networkEventName))\n\n\tresp, err := s.apiClient().NetworkCreate(ctx, n.Name, createOpts)\n\tif err != nil {\n\t\ts.events.On(errorEvent(networkEventName, err.Error()))\n\t\treturn \"\", fmt.Errorf(\"failed to create network %s: %w\", n.Name, err)\n\t}\n\ts.events.On(createdEvent(networkEventName))\n\n\terr = s.connectNetwork(ctx, n.Name, dangledContainers, nil)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn resp.ID, nil\n}\n\nfunc (s *composeService) removeDivergedNetwork(ctx context.Context, project *types.Project, name string, n *types.NetworkConfig) (Containers, error) {\n\t// Remove services attached to this network to force recreation\n\tvar services []string\n\tfor _, service := range project.Services.Filter(func(config types.ServiceConfig) bool {\n\t\t_, ok := config.Networks[name]\n\t\treturn ok\n\t}) {\n\t\tservices = append(services, service.Name)\n\t}\n\n\t// Stop containers so we can remove network\n\t// They will be restarted (actually: recreated) with the updated network\n\terr := s.stop(ctx, project.Name, api.StopOptions{\n\t\tServices: services,\n\t\tProject:  project,\n\t}, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tcontainers, err := s.getContainers(ctx, project.Name, oneOffExclude, true, services...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = s.disconnectNetwork(ctx, n.Name, containers)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t_, err = s.apiClient().NetworkRemove(ctx, n.Name, 
client.NetworkRemoveOptions{})\n\teventName := fmt.Sprintf(\"Network %s\", n.Name)\n\ts.events.On(removedEvent(eventName))\n\treturn containers, err\n}\n\nfunc (s *composeService) disconnectNetwork(\n\tctx context.Context,\n\tnwName string,\n\tcontainers Containers,\n) error {\n\tfor _, c := range containers {\n\t\t_, err := s.apiClient().NetworkDisconnect(ctx, nwName, client.NetworkDisconnectOptions{\n\t\t\tContainer: c.ID,\n\t\t\tForce:     true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (s *composeService) connectNetwork(\n\tctx context.Context,\n\tnwName string,\n\tcontainers Containers,\n\tconfig *network.EndpointSettings,\n) error {\n\tfor _, c := range containers {\n\t\t_, err := s.apiClient().NetworkConnect(ctx, nwName, client.NetworkConnectOptions{\n\t\t\tContainer:      c.ID,\n\t\t\tEndpointConfig: config,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (s *composeService) resolveExternalNetwork(ctx context.Context, n *types.NetworkConfig) (string, error) {\n\t// NetworkInspect will match on ID prefix, so NetworkList with a name\n\t// filter is used to look for an exact match to prevent e.g. 
a network\n\t// named `db` from getting erroneously matched to a network with an ID\n\t// like `db9086999caf`\n\tres, err := s.apiClient().NetworkList(ctx, client.NetworkListOptions{\n\t\tFilters: make(client.Filters).Add(\"name\", n.Name),\n\t})\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tnetworks := res.Items\n\n\tif len(networks) == 0 {\n\t\t// in this instance, n.Name is really an ID\n\t\tsn, err := s.apiClient().NetworkInspect(ctx, n.Name, client.NetworkInspectOptions{})\n\t\tif err == nil {\n\t\t\tnetworks = append(networks, network.Summary{Network: sn.Network.Network})\n\t\t} else if !errdefs.IsNotFound(err) {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\n\t// NetworkList API doesn't return the exact name match, so we can retrieve more than one network with a request\n\tnetworks = slices.DeleteFunc(networks, func(net network.Summary) bool {\n\t\t// this function is called during the rebuild stage of `compose watch`.\n\t\t// we still require just one network back, but we need to run the search on the ID\n\t\treturn net.Name != n.Name && net.ID != n.Name\n\t})\n\n\tswitch len(networks) {\n\tcase 1:\n\t\treturn networks[0].ID, nil\n\tcase 0:\n\t\tenabled, err := s.isSwarmEnabled(ctx)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tif enabled {\n\t\t\t// Swarm nodes do not register overlay networks that were\n\t\t\t// created on a different node unless they're in use.\n\t\t\t// So we can't preemptively check network exists, but\n\t\t\t// networkAttach will later fail anyway if network actually doesn't exist\n\t\t\treturn \"swarm\", nil\n\t\t}\n\t\treturn \"\", fmt.Errorf(\"network %s declared as external, but could not be found\", n.Name)\n\tdefault:\n\t\treturn \"\", fmt.Errorf(\"multiple networks with name %q were found. 
Use network ID as `name` to avoid ambiguity\", n.Name)\n\t}\n}\n\nfunc (s *composeService) ensureVolume(ctx context.Context, name string, volume types.VolumeConfig, project *types.Project) (string, error) {\n\tinspected, err := s.apiClient().VolumeInspect(ctx, volume.Name, client.VolumeInspectOptions{})\n\tif err != nil {\n\t\tif !errdefs.IsNotFound(err) {\n\t\t\treturn \"\", err\n\t\t}\n\t\tif volume.External {\n\t\t\treturn \"\", fmt.Errorf(\"external volume %q not found\", volume.Name)\n\t\t}\n\t\terr = s.createVolume(ctx, volume)\n\t\treturn volume.Name, err\n\t}\n\n\tif volume.External {\n\t\treturn volume.Name, nil\n\t}\n\n\t// Volume exists with name, but let's double-check this is the expected one\n\tp, ok := inspected.Volume.Labels[api.ProjectLabel]\n\tif !ok {\n\t\tlogrus.Warnf(\"volume %q already exists but was not created by Docker Compose. Use `external: true` to use an existing volume\", volume.Name)\n\t}\n\tif ok && p != project.Name {\n\t\tlogrus.Warnf(\"volume %q already exists but was created for project %q (expected %q). Use `external: true` to use an existing volume\", volume.Name, p, project.Name)\n\t}\n\n\texpected, err := VolumeHash(volume)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tactual, ok := inspected.Volume.Labels[api.ConfigHashLabel]\n\tif ok && actual != expected {\n\t\tmsg := fmt.Sprintf(\"Volume %q exists but doesn't match configuration in compose file. 
Recreate (data will be lost)?\", volume.Name)\n\t\tconfirm, err := s.prompt(msg, false)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tif confirm {\n\t\t\terr = s.removeDivergedVolume(ctx, name, volume, project)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t\treturn volume.Name, s.createVolume(ctx, volume)\n\t\t}\n\t}\n\treturn inspected.Volume.Name, nil\n}\n\nfunc (s *composeService) removeDivergedVolume(ctx context.Context, name string, volume types.VolumeConfig, project *types.Project) error {\n\t// Remove services mounting the diverged volume\n\tvar services []string\n\tfor _, service := range project.Services.Filter(func(config types.ServiceConfig) bool {\n\t\tfor _, cfg := range config.Volumes {\n\t\t\tif cfg.Source == name {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}) {\n\t\tservices = append(services, service.Name)\n\t}\n\n\terr := s.stop(ctx, project.Name, api.StopOptions{\n\t\tServices: services,\n\t\tProject:  project,\n\t}, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcontainers, err := s.getContainers(ctx, project.Name, oneOffExclude, true, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// FIXME (ndeloof) we have to remove containers so we can recreate the volume,\n\t// but doing so means we can't inherit anonymous volumes from the previous instance\n\terr = s.remove(ctx, containers, api.RemoveOptions{\n\t\tServices: services,\n\t\tProject:  project,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t_, err = s.apiClient().VolumeRemove(ctx, volume.Name, client.VolumeRemoveOptions{\n\t\tForce: true,\n\t})\n\treturn err\n}\n\nfunc (s *composeService) createVolume(ctx context.Context, volume types.VolumeConfig) error {\n\teventName := fmt.Sprintf(\"Volume %s\", volume.Name)\n\ts.events.On(creatingEvent(eventName))\n\thash, err := VolumeHash(volume)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvolume.CustomLabels = volume.CustomLabels.Add(api.ConfigHashLabel, hash)\n\t_, err = s.apiClient().VolumeCreate(ctx, 
client.VolumeCreateOptions{\n\t\tLabels:     mergeLabels(volume.Labels, volume.CustomLabels),\n\t\tName:       volume.Name,\n\t\tDriver:     volume.Driver,\n\t\tDriverOpts: volume.DriverOpts,\n\t})\n\tif err != nil {\n\t\ts.events.On(errorEvent(eventName, err.Error()))\n\t\treturn err\n\t}\n\ts.events.On(createdEvent(eventName))\n\treturn nil\n}\n\nfunc parseIPAMPool(pool *types.IPAMPool) (network.IPAMConfig, error) {\n\tvar (\n\t\terr        error\n\t\tsubNet     netip.Prefix\n\t\tipRange    netip.Prefix\n\t\tgateway    netip.Addr\n\t\tauxAddress map[string]netip.Addr\n\t)\n\tif pool.Subnet != \"\" {\n\t\tsubNet, err = netip.ParsePrefix(pool.Subnet)\n\t\tif err != nil {\n\t\t\treturn network.IPAMConfig{}, fmt.Errorf(\"invalid subnet: %w\", err)\n\t\t}\n\t}\n\tif pool.IPRange != \"\" {\n\t\tipRange, err = netip.ParsePrefix(pool.IPRange)\n\t\tif err != nil {\n\t\t\treturn network.IPAMConfig{}, fmt.Errorf(\"invalid ip-range: %w\", err)\n\t\t}\n\t}\n\tif pool.Gateway != \"\" {\n\t\tgateway, err = netip.ParseAddr(pool.Gateway)\n\t\tif err != nil {\n\t\t\treturn network.IPAMConfig{}, fmt.Errorf(\"invalid gateway address: %w\", err)\n\t\t}\n\t}\n\tif len(pool.AuxiliaryAddresses) > 0 {\n\t\tauxAddress = make(map[string]netip.Addr, len(pool.AuxiliaryAddresses))\n\t\tfor auxName, addr := range pool.AuxiliaryAddresses {\n\t\t\tauxAddr, err := netip.ParseAddr(addr)\n\t\t\tif err != nil {\n\t\t\t\treturn network.IPAMConfig{}, fmt.Errorf(\"invalid auxiliary address: %w\", err)\n\t\t\t}\n\t\t\tauxAddress[auxName] = auxAddr\n\t\t}\n\n\t}\n\treturn network.IPAMConfig{\n\t\tSubnet:     subNet,\n\t\tIPRange:    ipRange,\n\t\tGateway:    gateway,\n\t\tAuxAddress: auxAddress,\n\t}, nil\n}\n\nfunc parseMACAddr(macAddress string) (network.HardwareAddr, error) {\n\tif macAddress == \"\" {\n\t\treturn nil, nil\n\t}\n\tm, err := net.ParseMAC(macAddress)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"invalid MAC address: %w\", err)\n\t}\n\treturn network.HardwareAddr(m), nil\n}\n"
  },
  {
    "path": "pkg/compose/create_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"net\"\n\t\"net/netip\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sort\"\n\t\"testing\"\n\n\tcomposeloader \"github.com/compose-spec/compose-go/v2/loader\"\n\tcomposetypes \"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/google/go-cmp/cmp/cmpopts\"\n\t\"github.com/moby/moby/api/types/container\"\n\tmountTypes \"github.com/moby/moby/api/types/mount\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/assert/cmp\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestBuildBindMount(t *testing.T) {\n\tproject := composetypes.Project{}\n\tvolume := composetypes.ServiceVolumeConfig{\n\t\tType:   composetypes.VolumeTypeBind,\n\t\tSource: \"\",\n\t\tTarget: \"/data\",\n\t}\n\tmount, err := buildMount(project, volume)\n\tassert.NilError(t, err)\n\tassert.Assert(t, filepath.IsAbs(mount.Source))\n\t_, err = os.Stat(mount.Source)\n\tassert.NilError(t, err)\n\tassert.Equal(t, mount.Type, mountTypes.TypeBind)\n}\n\nfunc TestBuildNamedPipeMount(t *testing.T) {\n\tproject := composetypes.Project{}\n\tvolume := composetypes.ServiceVolumeConfig{\n\t\tType:   composetypes.VolumeTypeNamedPipe,\n\t\tSource: \"\\\\\\\\.\\\\pipe\\\\docker_engine_windows\",\n\t\tTarget: 
\"\\\\\\\\.\\\\pipe\\\\docker_engine\",\n\t}\n\tmount, err := buildMount(project, volume)\n\tassert.NilError(t, err)\n\tassert.Equal(t, mount.Type, mountTypes.TypeNamedPipe)\n}\n\nfunc TestBuildVolumeMount(t *testing.T) {\n\tproject := composetypes.Project{\n\t\tName: \"myProject\",\n\t\tVolumes: composetypes.Volumes(map[string]composetypes.VolumeConfig{\n\t\t\t\"myVolume\": {\n\t\t\t\tName: \"myProject_myVolume\",\n\t\t\t},\n\t\t}),\n\t}\n\tvolume := composetypes.ServiceVolumeConfig{\n\t\tType:   composetypes.VolumeTypeVolume,\n\t\tSource: \"myVolume\",\n\t\tTarget: \"/data\",\n\t}\n\tmount, err := buildMount(project, volume)\n\tassert.NilError(t, err)\n\tassert.Equal(t, mount.Source, \"myProject_myVolume\")\n\tassert.Equal(t, mount.Type, mountTypes.TypeVolume)\n}\n\nfunc TestServiceImageName(t *testing.T) {\n\tassert.Equal(t, api.GetImageNameOrDefault(composetypes.ServiceConfig{Image: \"myImage\"}, \"myProject\"), \"myImage\")\n\tassert.Equal(t, api.GetImageNameOrDefault(composetypes.ServiceConfig{Name: \"aService\"}, \"myProject\"), \"myProject-aService\")\n}\n\nfunc TestPrepareNetworkLabels(t *testing.T) {\n\tproject := composetypes.Project{\n\t\tName:     \"myProject\",\n\t\tNetworks: composetypes.Networks(map[string]composetypes.NetworkConfig{\"skynet\": {}}),\n\t}\n\tprepareNetworks(&project)\n\tassert.DeepEqual(t, project.Networks[\"skynet\"].CustomLabels, composetypes.Labels(map[string]string{\n\t\t\"com.docker.compose.network\": \"skynet\",\n\t\t\"com.docker.compose.project\": \"myProject\",\n\t\t\"com.docker.compose.version\": api.ComposeVersion,\n\t}))\n}\n\nfunc TestBuildContainerMountOptions(t *testing.T) {\n\tproject := composetypes.Project{\n\t\tName: \"myProject\",\n\t\tServices: composetypes.Services{\n\t\t\t\"myService\": {\n\t\t\t\tName: \"myService\",\n\t\t\t\tVolumes: []composetypes.ServiceVolumeConfig{\n\t\t\t\t\t{\n\t\t\t\t\t\tType:   composetypes.VolumeTypeVolume,\n\t\t\t\t\t\tTarget: 
\"/var/myvolume1\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tType:   composetypes.VolumeTypeVolume,\n\t\t\t\t\t\tTarget: \"/var/myvolume2\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tType:   composetypes.VolumeTypeVolume,\n\t\t\t\t\t\tSource: \"myVolume3\",\n\t\t\t\t\t\tTarget: \"/var/myvolume3\",\n\t\t\t\t\t\tVolume: &composetypes.ServiceVolumeVolume{\n\t\t\t\t\t\t\tSubpath: \"etc\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tType:   composetypes.VolumeTypeNamedPipe,\n\t\t\t\t\t\tSource: \"\\\\\\\\.\\\\pipe\\\\docker_engine_windows\",\n\t\t\t\t\t\tTarget: \"\\\\\\\\.\\\\pipe\\\\docker_engine\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tVolumes: composetypes.Volumes(map[string]composetypes.VolumeConfig{\n\t\t\t\"myVolume1\": {\n\t\t\t\tName: \"myProject_myVolume1\",\n\t\t\t},\n\t\t\t\"myVolume2\": {\n\t\t\t\tName: \"myProject_myVolume2\",\n\t\t\t},\n\t\t}),\n\t}\n\n\tinherit := &container.Summary{\n\t\tMounts: []container.MountPoint{\n\t\t\t{\n\t\t\t\tType:        composetypes.VolumeTypeVolume,\n\t\t\t\tDestination: \"/var/myvolume1\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tType:        composetypes.VolumeTypeVolume,\n\t\t\t\tDestination: \"/var/myvolume2\",\n\t\t\t},\n\t\t},\n\t}\n\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tmock, cli := prepareMocks(mockCtrl)\n\ts := composeService{\n\t\tdockerCli: cli,\n\t}\n\tmock.EXPECT().ImageInspect(gomock.Any(), \"myProject-myService\").AnyTimes().Return(client.ImageInspectResult{}, nil)\n\n\tmounts, err := s.buildContainerMountOptions(t.Context(), project, project.Services[\"myService\"], inherit)\n\tsort.Slice(mounts, func(i, j int) bool {\n\t\treturn mounts[i].Target < mounts[j].Target\n\t})\n\tassert.NilError(t, err)\n\tassert.Assert(t, len(mounts) == 4)\n\tassert.Equal(t, mounts[0].Target, \"/var/myvolume1\")\n\tassert.Equal(t, mounts[1].Target, \"/var/myvolume2\")\n\tassert.Equal(t, mounts[2].Target, \"/var/myvolume3\")\n\tassert.Equal(t, mounts[2].VolumeOptions.Subpath, 
\"etc\")\n\tassert.Equal(t, mounts[3].Target, \"\\\\\\\\.\\\\pipe\\\\docker_engine\")\n\n\tmounts, err = s.buildContainerMountOptions(t.Context(), project, project.Services[\"myService\"], inherit)\n\tsort.Slice(mounts, func(i, j int) bool {\n\t\treturn mounts[i].Target < mounts[j].Target\n\t})\n\tassert.NilError(t, err)\n\tassert.Assert(t, len(mounts) == 4)\n\tassert.Equal(t, mounts[0].Target, \"/var/myvolume1\")\n\tassert.Equal(t, mounts[1].Target, \"/var/myvolume2\")\n\tassert.Equal(t, mounts[2].Target, \"/var/myvolume3\")\n\tassert.Equal(t, mounts[2].VolumeOptions.Subpath, \"etc\")\n\tassert.Equal(t, mounts[3].Target, \"\\\\\\\\.\\\\pipe\\\\docker_engine\")\n}\n\nfunc TestDefaultNetworkSettings(t *testing.T) {\n\tt.Run(\"returns the network with the highest priority as primary when service has multiple networks\", func(t *testing.T) {\n\t\tservice := composetypes.ServiceConfig{\n\t\t\tName: \"myService\",\n\t\t\tNetworks: map[string]*composetypes.ServiceNetworkConfig{\n\t\t\t\t\"myNetwork1\": {\n\t\t\t\t\tPriority: 10,\n\t\t\t\t},\n\t\t\t\t\"myNetwork2\": {\n\t\t\t\t\tPriority: 1000,\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tproject := composetypes.Project{\n\t\t\tName: \"myProject\",\n\t\t\tServices: composetypes.Services{\n\t\t\t\t\"myService\": service,\n\t\t\t},\n\t\t\tNetworks: composetypes.Networks(map[string]composetypes.NetworkConfig{\n\t\t\t\t\"myNetwork1\": {\n\t\t\t\t\tName: \"myProject_myNetwork1\",\n\t\t\t\t},\n\t\t\t\t\"myNetwork2\": {\n\t\t\t\t\tName: \"myProject_myNetwork2\",\n\t\t\t\t},\n\t\t\t}),\n\t\t}\n\n\t\tnetworkMode, networkConfig, err := defaultNetworkSettings(&project, service, 1, nil, true, \"1.44\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, string(networkMode), \"myProject_myNetwork2\")\n\t\tassert.Check(t, cmp.Len(networkConfig.EndpointsConfig, 2))\n\t\tassert.Check(t, cmp.Contains(networkConfig.EndpointsConfig, \"myProject_myNetwork1\"))\n\t\tassert.Check(t, cmp.Contains(networkConfig.EndpointsConfig, 
\"myProject_myNetwork2\"))\n\t})\n\n\tt.Run(\"returns default network when service has no networks\", func(t *testing.T) {\n\t\tservice := composetypes.ServiceConfig{\n\t\t\tName: \"myService\",\n\t\t}\n\t\tproject := composetypes.Project{\n\t\t\tName: \"myProject\",\n\t\t\tServices: composetypes.Services{\n\t\t\t\t\"myService\": service,\n\t\t\t},\n\t\t\tNetworks: composetypes.Networks(map[string]composetypes.NetworkConfig{\n\t\t\t\t\"myNetwork1\": {\n\t\t\t\t\tName: \"myProject_myNetwork1\",\n\t\t\t\t},\n\t\t\t\t\"myNetwork2\": {\n\t\t\t\t\tName: \"myProject_myNetwork2\",\n\t\t\t\t},\n\t\t\t\t\"default\": {\n\t\t\t\t\tName: \"myProject_default\",\n\t\t\t\t},\n\t\t\t}),\n\t\t}\n\n\t\tnetworkMode, networkConfig, err := defaultNetworkSettings(&project, service, 1, nil, true, \"1.44\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, string(networkMode), \"myProject_default\")\n\t\tassert.Check(t, cmp.Len(networkConfig.EndpointsConfig, 1))\n\t\tassert.Check(t, cmp.Contains(networkConfig.EndpointsConfig, \"myProject_default\"))\n\t})\n\n\tt.Run(\"returns none if project has no networks\", func(t *testing.T) {\n\t\tservice := composetypes.ServiceConfig{\n\t\t\tName: \"myService\",\n\t\t}\n\t\tproject := composetypes.Project{\n\t\t\tName: \"myProject\",\n\t\t\tServices: composetypes.Services{\n\t\t\t\t\"myService\": service,\n\t\t\t},\n\t\t}\n\n\t\tnetworkMode, networkConfig, err := defaultNetworkSettings(&project, service, 1, nil, true, \"1.44\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, string(networkMode), \"none\")\n\t\tassert.Check(t, cmp.Nil(networkConfig))\n\t})\n\n\tt.Run(\"returns defined network mode if explicitly set\", func(t *testing.T) {\n\t\tservice := composetypes.ServiceConfig{\n\t\t\tName:        \"myService\",\n\t\t\tNetworkMode: \"host\",\n\t\t}\n\t\tproject := composetypes.Project{\n\t\t\tName:     \"myProject\",\n\t\t\tServices: composetypes.Services{\"myService\": service},\n\t\t\tNetworks: 
composetypes.Networks(map[string]composetypes.NetworkConfig{\n\t\t\t\t\"default\": {\n\t\t\t\t\tName: \"myProject_default\",\n\t\t\t\t},\n\t\t\t}),\n\t\t}\n\n\t\tnetworkMode, networkConfig, err := defaultNetworkSettings(&project, service, 1, nil, true, \"1.44\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, string(networkMode), \"host\")\n\t\tassert.Check(t, cmp.Nil(networkConfig))\n\t})\n}\n\nfunc TestCreateEndpointSettings(t *testing.T) {\n\teps, err := createEndpointSettings(&composetypes.Project{\n\t\tName: \"projName\",\n\t}, composetypes.ServiceConfig{\n\t\tName:          \"serviceName\",\n\t\tContainerName: \"containerName\",\n\t\tNetworks: map[string]*composetypes.ServiceNetworkConfig{\n\t\t\t\"netName\": {\n\t\t\t\tPriority:     100,\n\t\t\t\tAliases:      []string{\"alias1\", \"alias2\"},\n\t\t\t\tIpv4Address:  \"10.16.17.18\",\n\t\t\t\tIpv6Address:  \"fdb4:7a7f:373a:3f0c::42\",\n\t\t\t\tLinkLocalIPs: []string{\"169.254.10.20\"},\n\t\t\t\tMacAddress:   \"02:00:00:00:00:01\",\n\t\t\t\tDriverOpts: composetypes.Options{\n\t\t\t\t\t\"driverOpt1\": \"optval1\",\n\t\t\t\t\t\"driverOpt2\": \"optval2\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}, 0, \"netName\", []string{\"link1\", \"link2\"}, true)\n\tassert.NilError(t, err)\n\tmacAddr, _ := net.ParseMAC(\"02:00:00:00:00:01\")\n\tassert.Check(t, cmp.DeepEqual(eps, &network.EndpointSettings{\n\t\tIPAMConfig: &network.EndpointIPAMConfig{\n\t\t\tIPv4Address:  netip.MustParseAddr(\"10.16.17.18\").Unmap(),\n\t\t\tIPv6Address:  netip.MustParseAddr(\"fdb4:7a7f:373a:3f0c::42\"),\n\t\t\tLinkLocalIPs: []netip.Addr{netip.MustParseAddr(\"169.254.10.20\").Unmap()},\n\t\t},\n\t\tLinks:      []string{\"link1\", \"link2\"},\n\t\tAliases:    []string{\"containerName\", \"serviceName\", \"alias1\", \"alias2\"},\n\t\tMacAddress: network.HardwareAddr(macAddr),\n\t\tDriverOpts: map[string]string{\n\t\t\t\"driverOpt1\": \"optval1\",\n\t\t\t\"driverOpt2\": \"optval2\",\n\t\t},\n\n\t\t// FIXME(robmry) - IPAddress and IPv6Gateway are 
\"operational data\" fields...\n\t\t//  - The IPv6 address here is the container's address, not the gateway.\n\t\t//  - Both fields will be cleared by the daemon, but they could be removed from\n\t\t//    the request.\n\t\tIPAddress:   netip.MustParseAddr(\"10.16.17.18\").Unmap(),\n\t\tIPv6Gateway: netip.MustParseAddr(\"fdb4:7a7f:373a:3f0c::42\"),\n\t}, cmpopts.EquateComparable(netip.Addr{})))\n}\n\nfunc Test_buildContainerVolumes(t *testing.T) {\n\tpwd, err := os.Getwd()\n\tassert.NilError(t, err)\n\n\ttests := []struct {\n\t\tname   string\n\t\tyaml   string\n\t\tbinds  []string\n\t\tmounts []mountTypes.Mount\n\t}{\n\t\t{\n\t\t\tname: \"bind mount local path\",\n\t\t\tyaml: `\nservices:\n  test:\n    volumes:\n      - ./data:/data\n`,\n\t\t\tbinds:  []string{filepath.Join(pwd, \"data\") + \":/data:rw\"},\n\t\t\tmounts: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"bind mount, not create host path\",\n\t\t\tyaml: `\nservices:\n  test:\n    volumes:\n      - type: bind\n        source: ./data\n        target: /data\n        bind:\n          create_host_path: false\n`,\n\t\t\tbinds: nil,\n\t\t\tmounts: []mountTypes.Mount{\n\t\t\t\t{\n\t\t\t\t\tType:        \"bind\",\n\t\t\t\t\tSource:      filepath.Join(pwd, \"data\"),\n\t\t\t\t\tTarget:      \"/data\",\n\t\t\t\t\tBindOptions: &mountTypes.BindOptions{CreateMountpoint: false},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"mount volume\",\n\t\t\tyaml: `\nservices:\n  test:\n    volumes:\n      - data:/data\nvolumes:\n  data:\n    name: my_volume\n`,\n\t\t\tbinds:  []string{\"my_volume:/data:rw\"},\n\t\t\tmounts: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"mount volume, readonly\",\n\t\t\tyaml: `\nservices:\n  test:\n    volumes:\n      - data:/data:ro\nvolumes:\n  data:\n    name: my_volume\n`,\n\t\t\tbinds:  []string{\"my_volume:/data:ro\"},\n\t\t\tmounts: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"mount volume subpath\",\n\t\t\tyaml: `\nservices:\n  test:\n    volumes:\n      - type: volume\n        source: data\n        target: /data\n  
      volume:\n          subpath: test/\nvolumes:\n  data: \n    name: my_volume\n`,\n\t\t\tbinds: nil,\n\t\t\tmounts: []mountTypes.Mount{\n\t\t\t\t{\n\t\t\t\t\tType:          \"volume\",\n\t\t\t\t\tSource:        \"my_volume\",\n\t\t\t\t\tTarget:        \"/data\",\n\t\t\t\t\tVolumeOptions: &mountTypes.VolumeOptions{Subpath: \"test/\"},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp, err := composeloader.LoadWithContext(t.Context(), composetypes.ConfigDetails{\n\t\t\t\tConfigFiles: []composetypes.ConfigFile{\n\t\t\t\t\t{\n\t\t\t\t\t\tFilename: \"test\",\n\t\t\t\t\t\tContent:  []byte(tt.yaml),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}, func(options *composeloader.Options) {\n\t\t\t\toptions.SkipValidation = true\n\t\t\t\toptions.SkipConsistencyCheck = true\n\t\t\t})\n\t\t\tassert.NilError(t, err)\n\t\t\ts := &composeService{}\n\t\t\tbinds, mounts, err := s.buildContainerVolumes(t.Context(), *p, p.Services[\"test\"], nil)\n\t\t\tassert.NilError(t, err)\n\t\t\tassert.DeepEqual(t, tt.binds, binds)\n\t\t\tassert.DeepEqual(t, tt.mounts, mounts)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/dependencies.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// ServiceStatus indicates the status of a service\ntype ServiceStatus int\n\n// Services status flags\nconst (\n\tServiceStopped ServiceStatus = iota\n\tServiceStarted\n)\n\ntype graphTraversal struct {\n\tmu      sync.Mutex\n\tseen    map[string]struct{}\n\tignored map[string]struct{}\n\n\textremityNodesFn            func(*Graph) []*Vertex                        // leaves or roots\n\tadjacentNodesFn             func(*Vertex) []*Vertex                       // getParents or getChildren\n\tfilterAdjacentByStatusFn    func(*Graph, string, ServiceStatus) []*Vertex // filterChildren or filterParents\n\ttargetServiceStatus         ServiceStatus\n\tadjacentServiceStatusToSkip ServiceStatus\n\n\tvisitorFn      func(context.Context, string) error\n\tmaxConcurrency int\n}\n\nfunc upDirectionTraversal(visitorFn func(context.Context, string) error) *graphTraversal {\n\treturn &graphTraversal{\n\t\textremityNodesFn:            leaves,\n\t\tadjacentNodesFn:             getParents,\n\t\tfilterAdjacentByStatusFn:    filterChildren,\n\t\tadjacentServiceStatusToSkip: ServiceStopped,\n\t\ttargetServiceStatus:         
ServiceStarted,\n\t\tvisitorFn:                   visitorFn,\n\t}\n}\n\nfunc downDirectionTraversal(visitorFn func(context.Context, string) error) *graphTraversal {\n\treturn &graphTraversal{\n\t\textremityNodesFn:            roots,\n\t\tadjacentNodesFn:             getChildren,\n\t\tfilterAdjacentByStatusFn:    filterParents,\n\t\tadjacentServiceStatusToSkip: ServiceStarted,\n\t\ttargetServiceStatus:         ServiceStopped,\n\t\tvisitorFn:                   visitorFn,\n\t}\n}\n\n// InDependencyOrder applies the function to the services of the project taking into account the dependency order\nfunc InDependencyOrder(ctx context.Context, project *types.Project, fn func(context.Context, string) error, options ...func(*graphTraversal)) error {\n\tgraph, err := NewGraph(project, ServiceStopped)\n\tif err != nil {\n\t\treturn err\n\t}\n\tt := upDirectionTraversal(fn)\n\tfor _, option := range options {\n\t\toption(t)\n\t}\n\treturn t.visit(ctx, graph)\n}\n\n// InReverseDependencyOrder applies the function to the services of the project in reverse order of dependencies\nfunc InReverseDependencyOrder(ctx context.Context, project *types.Project, fn func(context.Context, string) error, options ...func(*graphTraversal)) error {\n\tgraph, err := NewGraph(project, ServiceStarted)\n\tif err != nil {\n\t\treturn err\n\t}\n\tt := downDirectionTraversal(fn)\n\tfor _, option := range options {\n\t\toption(t)\n\t}\n\treturn t.visit(ctx, graph)\n}\n\n// WithRootNodesAndDown restricts the traversal to the given nodes and the nodes that depend on them; all other nodes are walked but not visited\nfunc WithRootNodesAndDown(nodes []string) func(*graphTraversal) {\n\treturn func(t *graphTraversal) {\n\t\tif len(nodes) == 0 {\n\t\t\treturn\n\t\t}\n\t\toriginalFn := t.extremityNodesFn\n\t\tt.extremityNodesFn = func(graph *Graph) []*Vertex {\n\t\t\tvar want []string\n\t\t\tfor _, node := range nodes {\n\t\t\t\tvertex := graph.Vertices[node]\n\t\t\t\twant = append(want, vertex.Service)\n\t\t\t\tfor _, v := range getAncestors(vertex) {\n\t\t\t\t\twant = append(want, v.Service)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tt.ignored = 
map[string]struct{}{}\n\t\t\tfor k := range graph.Vertices {\n\t\t\t\tif !slices.Contains(want, k) {\n\t\t\t\t\tt.ignored[k] = struct{}{}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn originalFn(graph)\n\t\t}\n\t}\n}\n\nfunc (t *graphTraversal) visit(ctx context.Context, g *Graph) error {\n\texpect := len(g.Vertices)\n\tif expect == 0 {\n\t\treturn nil\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tif t.maxConcurrency > 0 {\n\t\teg.SetLimit(t.maxConcurrency + 1)\n\t}\n\tnodeCh := make(chan *Vertex, expect)\n\tdefer close(nodeCh)\n\t// nodeCh is buffered to allow n=expect writers, as the reader goroutine may have returned after ctx.Done\n\teg.Go(func() error {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn nil\n\t\t\tcase node := <-nodeCh:\n\t\t\t\texpect--\n\t\t\t\tif expect == 0 {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\tt.run(ctx, g, eg, t.adjacentNodesFn(node), nodeCh)\n\t\t\t}\n\t\t}\n\t})\n\n\tnodes := t.extremityNodesFn(g)\n\tt.run(ctx, g, eg, nodes, nodeCh)\n\n\treturn eg.Wait()\n}\n\n// Note: this could be renamed to `graph.walk`\nfunc (t *graphTraversal) run(ctx context.Context, graph *Graph, eg *errgroup.Group, nodes []*Vertex, nodeCh chan *Vertex) {\n\tfor _, node := range nodes {\n\t\t// Don't visit this node yet if any of its adjacent nodes\n\t\t// (children going up, parents going down) has not been visited yet.\n\t\tif len(t.filterAdjacentByStatusFn(graph, node.Key, t.adjacentServiceStatusToSkip)) != 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\tif !t.consume(node.Key) {\n\t\t\t// another worker already visited this node\n\t\t\tcontinue\n\t\t}\n\n\t\teg.Go(func() error {\n\t\t\tvar err error\n\t\t\tif _, ignore := t.ignored[node.Service]; !ignore {\n\t\t\t\terr = t.visitorFn(ctx, node.Service)\n\t\t\t}\n\t\t\tif err == nil {\n\t\t\t\tgraph.UpdateStatus(node.Key, t.targetServiceStatus)\n\t\t\t}\n\t\t\tnodeCh <- node\n\t\t\treturn err\n\t\t})\n\t}\n}\n\nfunc (t *graphTraversal) consume(nodeKey string) bool {\n\tt.mu.Lock()\n\tdefer t.mu.Unlock()\n\tif t.seen == nil {\n\t\tt.seen = 
make(map[string]struct{})\n\t}\n\tif _, ok := t.seen[nodeKey]; ok {\n\t\treturn false\n\t}\n\tt.seen[nodeKey] = struct{}{}\n\treturn true\n}\n\n// Graph represents the project as a graph of service dependencies\ntype Graph struct {\n\tVertices map[string]*Vertex\n\tlock     sync.RWMutex\n}\n\n// Vertex represents a service in the dependencies structure\ntype Vertex struct {\n\tKey      string\n\tService  string\n\tStatus   ServiceStatus\n\tChildren map[string]*Vertex\n\tParents  map[string]*Vertex\n}\n\nfunc getParents(v *Vertex) []*Vertex {\n\treturn v.GetParents()\n}\n\n// GetParents returns a slice with the parent vertices of the Vertex\nfunc (v *Vertex) GetParents() []*Vertex {\n\tvar res []*Vertex\n\tfor _, p := range v.Parents {\n\t\tres = append(res, p)\n\t}\n\treturn res\n}\n\nfunc getChildren(v *Vertex) []*Vertex {\n\treturn v.GetChildren()\n}\n\n// getAncestors returns all ancestors of a vertex; the result might contain duplicates\nfunc getAncestors(v *Vertex) []*Vertex {\n\tvar ancestors []*Vertex\n\tfor _, parent := range v.GetParents() {\n\t\tancestors = append(ancestors, parent)\n\t\tancestors = append(ancestors, getAncestors(parent)...)\n\t}\n\treturn ancestors\n}\n\n// GetChildren returns a slice with the child vertices of the Vertex\nfunc (v *Vertex) GetChildren() []*Vertex {\n\tvar res []*Vertex\n\tfor _, p := range v.Children {\n\t\tres = append(res, p)\n\t}\n\treturn res\n}\n\n// NewGraph returns the dependency graph of the services\nfunc NewGraph(project *types.Project, initialStatus ServiceStatus) (*Graph, error) {\n\tgraph := &Graph{\n\t\tlock:     sync.RWMutex{},\n\t\tVertices: map[string]*Vertex{},\n\t}\n\n\tfor _, s := range project.Services {\n\t\tgraph.AddVertex(s.Name, s.Name, initialStatus)\n\t}\n\n\tfor index, s := range project.Services {\n\t\tfor _, name := range s.GetDependencies() {\n\t\t\terr := graph.AddEdge(s.Name, name)\n\t\t\tif err != nil {\n\t\t\t\tif !s.DependsOn[name].Required {\n\t\t\t\t\tdelete(s.DependsOn, 
name)\n\t\t\t\t\tproject.Services[index] = s\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tif api.IsNotFoundError(err) {\n\t\t\t\t\tds, err := project.GetDisabledService(name)\n\t\t\t\t\tif err == nil {\n\t\t\t\t\t\treturn nil, fmt.Errorf(\"service %s is required by %s but is disabled. Can be enabled by profiles %s\", name, s.Name, ds.Profiles)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\tif b, err := graph.HasCycles(); b {\n\t\treturn nil, err\n\t}\n\n\treturn graph, nil\n}\n\n// NewVertex is the constructor function for the Vertex\nfunc NewVertex(key string, service string, initialStatus ServiceStatus) *Vertex {\n\treturn &Vertex{\n\t\tKey:      key,\n\t\tService:  service,\n\t\tStatus:   initialStatus,\n\t\tParents:  map[string]*Vertex{},\n\t\tChildren: map[string]*Vertex{},\n\t}\n}\n\n// AddVertex adds a vertex to the Graph\nfunc (g *Graph) AddVertex(key string, service string, initialStatus ServiceStatus) {\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\n\tv := NewVertex(key, service, initialStatus)\n\tg.Vertices[key] = v\n}\n\n// AddEdge adds a relationship of dependency between vertices `source` and `destination`\nfunc (g *Graph) AddEdge(source string, destination string) error {\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\n\tsourceVertex := g.Vertices[source]\n\tdestinationVertex := g.Vertices[destination]\n\n\tif sourceVertex == nil {\n\t\treturn fmt.Errorf(\"could not find %s: %w\", source, api.ErrNotFound)\n\t}\n\tif destinationVertex == nil {\n\t\treturn fmt.Errorf(\"could not find %s: %w\", destination, api.ErrNotFound)\n\t}\n\n\t// If they are already connected\n\tif _, ok := sourceVertex.Children[destination]; ok {\n\t\treturn nil\n\t}\n\n\tsourceVertex.Children[destination] = destinationVertex\n\tdestinationVertex.Parents[source] = sourceVertex\n\n\treturn nil\n}\n\nfunc leaves(g *Graph) []*Vertex {\n\treturn g.Leaves()\n}\n\n// Leaves returns the slice of leaves of the graph\nfunc (g *Graph) Leaves() []*Vertex 
{\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\n\tvar res []*Vertex\n\tfor _, v := range g.Vertices {\n\t\tif len(v.Children) == 0 {\n\t\t\tres = append(res, v)\n\t\t}\n\t}\n\n\treturn res\n}\n\nfunc roots(g *Graph) []*Vertex {\n\treturn g.Roots()\n}\n\n// Roots returns the slice of \"Roots\" of the graph\nfunc (g *Graph) Roots() []*Vertex {\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\n\tvar res []*Vertex\n\tfor _, v := range g.Vertices {\n\t\tif len(v.Parents) == 0 {\n\t\t\tres = append(res, v)\n\t\t}\n\t}\n\treturn res\n}\n\n// UpdateStatus updates the status of a certain vertex\nfunc (g *Graph) UpdateStatus(key string, status ServiceStatus) {\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\tg.Vertices[key].Status = status\n}\n\nfunc filterChildren(g *Graph, k string, s ServiceStatus) []*Vertex {\n\treturn g.FilterChildren(k, s)\n}\n\n// FilterChildren returns children of a certain vertex that are in a certain status\nfunc (g *Graph) FilterChildren(key string, status ServiceStatus) []*Vertex {\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\n\tvar res []*Vertex\n\tvertex := g.Vertices[key]\n\n\tfor _, child := range vertex.Children {\n\t\tif child.Status == status {\n\t\t\tres = append(res, child)\n\t\t}\n\t}\n\n\treturn res\n}\n\nfunc filterParents(g *Graph, k string, s ServiceStatus) []*Vertex {\n\treturn g.FilterParents(k, s)\n}\n\n// FilterParents returns the parents of a certain vertex that are in a certain status\nfunc (g *Graph) FilterParents(key string, status ServiceStatus) []*Vertex {\n\tg.lock.Lock()\n\tdefer g.lock.Unlock()\n\n\tvar res []*Vertex\n\tvertex := g.Vertices[key]\n\n\tfor _, parent := range vertex.Parents {\n\t\tif parent.Status == status {\n\t\t\tres = append(res, parent)\n\t\t}\n\t}\n\n\treturn res\n}\n\n// HasCycles detects cycles in the graph\nfunc (g *Graph) HasCycles() (bool, error) {\n\tdiscovered := []string{}\n\tfinished := []string{}\n\n\tfor _, vertex := range g.Vertices {\n\t\tpath := []string{\n\t\t\tvertex.Key,\n\t\t}\n\t\tif 
!slices.Contains(discovered, vertex.Key) && !slices.Contains(finished, vertex.Key) {\n\t\t\tvar err error\n\t\t\tdiscovered, finished, err = g.visit(vertex.Key, path, discovered, finished)\n\t\t\tif err != nil {\n\t\t\t\treturn true, err\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false, nil\n}\n\nfunc (g *Graph) visit(key string, path []string, discovered []string, finished []string) ([]string, []string, error) {\n\tdiscovered = append(discovered, key)\n\n\tfor _, v := range g.Vertices[key].Children {\n\t\tpath := append(path, v.Key)\n\t\tif slices.Contains(discovered, v.Key) {\n\t\t\treturn nil, nil, fmt.Errorf(\"cycle found: %s\", strings.Join(path, \" -> \"))\n\t\t}\n\n\t\tif !slices.Contains(finished, v.Key) {\n\t\t\tif _, _, err := g.visit(v.Key, path, discovered, finished); err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\tdiscovered = remove(discovered, key)\n\tfinished = append(finished, key)\n\treturn discovered, finished, nil\n}\n\nfunc remove(slice []string, item string) []string {\n\tvar s []string\n\tfor _, i := range slice {\n\t\tif i != item {\n\t\t\ts = append(s, i)\n\t\t}\n\t}\n\treturn s\n}\n"
  },
  {
    "path": "pkg/compose/dependencies_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"sort\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\ttestify \"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc createTestProject() *types.Project {\n\treturn &types.Project{\n\t\tServices: types.Services{\n\t\t\t\"test1\": {\n\t\t\t\tName: \"test1\",\n\t\t\t\tDependsOn: map[string]types.ServiceDependency{\n\t\t\t\t\t\"test2\": {},\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"test2\": {\n\t\t\t\tName: \"test2\",\n\t\t\t\tDependsOn: map[string]types.ServiceDependency{\n\t\t\t\t\t\"test3\": {},\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"test3\": {\n\t\t\t\tName: \"test3\",\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc TestTraversalWithMultipleParents(t *testing.T) {\n\tdependent := types.ServiceConfig{\n\t\tName:      \"dependent\",\n\t\tDependsOn: make(types.DependsOnConfig),\n\t}\n\n\tproject := types.Project{\n\t\tServices: types.Services{\"dependent\": dependent},\n\t}\n\n\tfor i := 1; i <= 100; i++ {\n\t\tname := fmt.Sprintf(\"svc_%d\", i)\n\t\tdependent.DependsOn[name] = types.ServiceDependency{}\n\n\t\tsvc := types.ServiceConfig{Name: name}\n\t\tproject.Services[name] = svc\n\t}\n\n\tsvc := make(chan string, 10)\n\tseen := make(map[string]int)\n\tdone := 
make(chan struct{})\n\tgo func() {\n\t\tfor service := range svc {\n\t\t\tseen[service]++\n\t\t}\n\t\tdone <- struct{}{}\n\t}()\n\n\terr := InDependencyOrder(t.Context(), &project, func(ctx context.Context, service string) error {\n\t\tsvc <- service\n\t\treturn nil\n\t})\n\trequire.NoError(t, err, \"Error during iteration\")\n\tclose(svc)\n\t<-done\n\n\ttestify.Len(t, seen, 101)\n\tfor svc, count := range seen {\n\t\tassert.Equal(t, 1, count, \"Service: %s\", svc)\n\t}\n}\n\nfunc TestInDependencyUpCommandOrder(t *testing.T) {\n\tvar order []string\n\terr := InDependencyOrder(t.Context(), createTestProject(), func(ctx context.Context, service string) error {\n\t\torder = append(order, service)\n\t\treturn nil\n\t})\n\trequire.NoError(t, err, \"Error during iteration\")\n\trequire.Equal(t, []string{\"test3\", \"test2\", \"test1\"}, order)\n}\n\nfunc TestInDependencyReverseDownCommandOrder(t *testing.T) {\n\tvar order []string\n\terr := InReverseDependencyOrder(t.Context(), createTestProject(), func(ctx context.Context, service string) error {\n\t\torder = append(order, service)\n\t\treturn nil\n\t})\n\trequire.NoError(t, err, \"Error during iteration\")\n\trequire.Equal(t, []string{\"test1\", \"test2\", \"test3\"}, order)\n}\n\nfunc TestBuildGraph(t *testing.T) {\n\ttestCases := []struct {\n\t\tdesc             string\n\t\tservices         types.Services\n\t\texpectedVertices map[string]*Vertex\n\t}{\n\t\t{\n\t\t\tdesc: \"builds graph with single service\",\n\t\t\tservices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName:      \"test\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedVertices: map[string]*Vertex{\n\t\t\t\t\"test\": {\n\t\t\t\t\tKey:      \"test\",\n\t\t\t\t\tService:  \"test\",\n\t\t\t\t\tStatus:   ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{},\n\t\t\t\t\tParents:  map[string]*Vertex{},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdesc: \"builds graph with two separate services\",\n\t\t\tservices: 
types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName:      \"test\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{},\n\t\t\t\t},\n\t\t\t\t\"another\": {\n\t\t\t\t\tName:      \"another\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedVertices: map[string]*Vertex{\n\t\t\t\t\"test\": {\n\t\t\t\t\tKey:      \"test\",\n\t\t\t\t\tService:  \"test\",\n\t\t\t\t\tStatus:   ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{},\n\t\t\t\t\tParents:  map[string]*Vertex{},\n\t\t\t\t},\n\t\t\t\t\"another\": {\n\t\t\t\t\tKey:      \"another\",\n\t\t\t\t\tService:  \"another\",\n\t\t\t\t\tStatus:   ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{},\n\t\t\t\t\tParents:  map[string]*Vertex{},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdesc: \"builds graph with a service and a dependency\",\n\t\t\tservices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName: \"test\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{\n\t\t\t\t\t\t\"another\": types.ServiceDependency{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\"another\": {\n\t\t\t\t\tName:      \"another\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedVertices: map[string]*Vertex{\n\t\t\t\t\"test\": {\n\t\t\t\t\tKey:     \"test\",\n\t\t\t\t\tService: \"test\",\n\t\t\t\t\tStatus:  ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{\n\t\t\t\t\t\t\"another\": {},\n\t\t\t\t\t},\n\t\t\t\t\tParents: map[string]*Vertex{},\n\t\t\t\t},\n\t\t\t\t\"another\": {\n\t\t\t\t\tKey:      \"another\",\n\t\t\t\t\tService:  \"another\",\n\t\t\t\t\tStatus:   ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{},\n\t\t\t\t\tParents: map[string]*Vertex{\n\t\t\t\t\t\t\"test\": {},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tdesc: \"builds graph with multiple dependency levels\",\n\t\t\tservices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName: \"test\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{\n\t\t\t\t\t\t\"another\": 
types.ServiceDependency{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\"another\": {\n\t\t\t\t\tName: \"another\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{\n\t\t\t\t\t\t\"another_dep\": types.ServiceDependency{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\"another_dep\": {\n\t\t\t\t\tName:      \"another_dep\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedVertices: map[string]*Vertex{\n\t\t\t\t\"test\": {\n\t\t\t\t\tKey:     \"test\",\n\t\t\t\t\tService: \"test\",\n\t\t\t\t\tStatus:  ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{\n\t\t\t\t\t\t\"another\": {},\n\t\t\t\t\t},\n\t\t\t\t\tParents: map[string]*Vertex{},\n\t\t\t\t},\n\t\t\t\t\"another\": {\n\t\t\t\t\tKey:     \"another\",\n\t\t\t\t\tService: \"another\",\n\t\t\t\t\tStatus:  ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{\n\t\t\t\t\t\t\"another_dep\": {},\n\t\t\t\t\t},\n\t\t\t\t\tParents: map[string]*Vertex{\n\t\t\t\t\t\t\"test\": {},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\"another_dep\": {\n\t\t\t\t\tKey:      \"another_dep\",\n\t\t\t\t\tService:  \"another_dep\",\n\t\t\t\t\tStatus:   ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{},\n\t\t\t\t\tParents: map[string]*Vertex{\n\t\t\t\t\t\t\"another\": {},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tC := range testCases {\n\t\tt.Run(tC.desc, func(t *testing.T) {\n\t\t\tproject := types.Project{\n\t\t\t\tServices: tC.services,\n\t\t\t}\n\n\t\t\tgraph, err := NewGraph(&project, ServiceStopped)\n\t\t\tassert.NilError(t, err, fmt.Sprintf(\"failed to build graph for: %s\", tC.desc))\n\n\t\t\tfor k, vertex := range graph.Vertices {\n\t\t\t\texpected, ok := tC.expectedVertices[k]\n\t\t\t\tassert.Equal(t, true, ok)\n\t\t\t\tassert.Equal(t, true, isVertexEqual(*expected, *vertex))\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestBuildGraphDependsOn(t *testing.T) {\n\ttestCases := []struct {\n\t\tdesc             string\n\t\tservices         types.Services\n\t\texpectedVertices 
map[string]*Vertex\n\t}{\n\t\t{\n\t\t\tdesc: \"service depends on init container which is already removed\",\n\t\t\tservices: types.Services{\n\t\t\t\t\"test\": {\n\t\t\t\t\tName: \"test\",\n\t\t\t\t\tDependsOn: types.DependsOnConfig{\n\t\t\t\t\t\t\"test-removed-init-container\": types.ServiceDependency{\n\t\t\t\t\t\t\tCondition:  \"service_completed_successfully\",\n\t\t\t\t\t\t\tRestart:    false,\n\t\t\t\t\t\t\tExtensions: types.Extensions(nil),\n\t\t\t\t\t\t\tRequired:   false,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedVertices: map[string]*Vertex{\n\t\t\t\t\"test\": {\n\t\t\t\t\tKey:      \"test\",\n\t\t\t\t\tService:  \"test\",\n\t\t\t\t\tStatus:   ServiceStopped,\n\t\t\t\t\tChildren: map[string]*Vertex{},\n\t\t\t\t\tParents:  map[string]*Vertex{},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tC := range testCases {\n\t\tt.Run(tC.desc, func(t *testing.T) {\n\t\t\tproject := types.Project{\n\t\t\t\tServices: tC.services,\n\t\t\t}\n\n\t\t\tgraph, err := NewGraph(&project, ServiceStopped)\n\t\t\tassert.NilError(t, err, fmt.Sprintf(\"failed to build graph for: %s\", tC.desc))\n\n\t\t\tfor k, vertex := range graph.Vertices {\n\t\t\t\texpected, ok := tC.expectedVertices[k]\n\t\t\t\tassert.Equal(t, true, ok)\n\t\t\t\tassert.Equal(t, true, isVertexEqual(*expected, *vertex))\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc isVertexEqual(a, b Vertex) bool {\n\tchildrenEquality := true\n\tfor c := range a.Children {\n\t\tif _, ok := b.Children[c]; !ok {\n\t\t\tchildrenEquality = false\n\t\t}\n\t}\n\tparentEquality := true\n\tfor p := range a.Parents {\n\t\tif _, ok := b.Parents[p]; !ok {\n\t\t\tparentEquality = false\n\t\t}\n\t}\n\treturn a.Key == b.Key &&\n\t\ta.Service == b.Service &&\n\t\tchildrenEquality &&\n\t\tparentEquality\n}\n\nfunc TestWithRootNodesAndDown(t *testing.T) {\n\tgraph := &Graph{\n\t\tlock:     sync.RWMutex{},\n\t\tVertices: map[string]*Vertex{},\n\t}\n\n\t/** graph topology:\n\t           A   B\n\t\t      / \\ / \\\n\t\t     G   C   
E\n\t\t          \\ /\n\t\t           D\n\t\t           |\n\t\t           F\n\t*/\n\n\tgraph.AddVertex(\"A\", \"A\", 0)\n\tgraph.AddVertex(\"B\", \"B\", 0)\n\tgraph.AddVertex(\"C\", \"C\", 0)\n\tgraph.AddVertex(\"D\", \"D\", 0)\n\tgraph.AddVertex(\"E\", \"E\", 0)\n\tgraph.AddVertex(\"F\", \"F\", 0)\n\tgraph.AddVertex(\"G\", \"G\", 0)\n\n\t_ = graph.AddEdge(\"C\", \"A\")\n\t_ = graph.AddEdge(\"C\", \"B\")\n\t_ = graph.AddEdge(\"E\", \"B\")\n\t_ = graph.AddEdge(\"D\", \"C\")\n\t_ = graph.AddEdge(\"D\", \"E\")\n\t_ = graph.AddEdge(\"F\", \"D\")\n\t_ = graph.AddEdge(\"G\", \"A\")\n\n\ttests := []struct {\n\t\tname  string\n\t\tnodes []string\n\t\twant  []string\n\t}{\n\t\t{\n\t\t\tname:  \"whole graph\",\n\t\t\tnodes: []string{\"A\", \"B\"},\n\t\t\twant:  []string{\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\"},\n\t\t},\n\t\t{\n\t\t\tname:  \"only leaves\",\n\t\t\tnodes: []string{\"F\", \"G\"},\n\t\t\twant:  []string{\"F\", \"G\"},\n\t\t},\n\t\t{\n\t\t\tname:  \"simple dependent\",\n\t\t\tnodes: []string{\"D\"},\n\t\t\twant:  []string{\"D\", \"F\"},\n\t\t},\n\t\t{\n\t\t\tname:  \"diamond dependents\",\n\t\t\tnodes: []string{\"B\"},\n\t\t\twant:  []string{\"B\", \"C\", \"D\", \"E\", \"F\"},\n\t\t},\n\t\t{\n\t\t\tname:  \"partial graph\",\n\t\t\tnodes: []string{\"A\"},\n\t\t\twant:  []string{\"A\", \"C\", \"D\", \"F\", \"G\"},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tmx := sync.Mutex{}\n\t\t\texpected := utils.Set[string]{}\n\t\t\texpected.AddAll(\"C\", \"G\", \"D\", \"F\")\n\t\t\tvar visited []string\n\n\t\t\tgt := downDirectionTraversal(func(ctx context.Context, s string) error {\n\t\t\t\tmx.Lock()\n\t\t\t\tdefer mx.Unlock()\n\t\t\t\tvisited = append(visited, s)\n\t\t\t\treturn nil\n\t\t\t})\n\t\t\tWithRootNodesAndDown(tt.nodes)(gt)\n\t\t\terr := gt.visit(t.Context(), graph)\n\t\t\tassert.NilError(t, err)\n\t\t\tsort.Strings(visited)\n\t\t\tassert.DeepEqual(t, tt.want, visited)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/desktop.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/client\"\n)\n\n// engineLabelDesktopAddress is used to detect that Compose is running with a\n// Docker Desktop context. When this label is present, the value is an endpoint\n// address for an in-memory socket (AF_UNIX or named pipe).\nconst engineLabelDesktopAddress = \"com.docker.desktop.address\"\n\nfunc (s *composeService) isDesktopIntegrationActive(ctx context.Context) (bool, error) {\n\tres, err := s.apiClient().Info(ctx, client.InfoOptions{})\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tfor _, l := range res.Info.Labels {\n\t\tk, _, ok := strings.Cut(l, \"=\")\n\t\tif ok && k == engineLabelDesktopAddress {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\treturn false, nil\n}\n"
  },
  {
    "path": "pkg/compose/docker_cli_providers.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"github.com/docker/cli/cli/command\"\n)\n\n// dockerCliContextInfo implements api.ContextInfo using Docker CLI\ntype dockerCliContextInfo struct {\n\tcli command.Cli\n}\n\nfunc (c *dockerCliContextInfo) CurrentContext() string {\n\treturn c.cli.CurrentContext()\n}\n\nfunc (c *dockerCliContextInfo) ServerOSType() string {\n\treturn c.cli.ServerInfo().OSType\n}\n\nfunc (c *dockerCliContextInfo) BuildKitEnabled() (bool, error) {\n\treturn c.cli.BuildKitEnabled()\n}\n"
  },
  {
    "path": "pkg/compose/down.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\tcontainerType \"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\ntype downOp func() error\n\nfunc (s *composeService) Down(ctx context.Context, projectName string, options api.DownOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.down(ctx, strings.ToLower(projectName), options)\n\t}, \"down\", s.events)\n}\n\nfunc (s *composeService) down(ctx context.Context, projectName string, options api.DownOptions) error { //nolint:gocyclo\n\tresourceToRemove := false\n\n\tinclude := oneOffExclude\n\tif options.RemoveOrphans {\n\t\tinclude = oneOffInclude\n\t}\n\tcontainers, err := s.getContainers(ctx, projectName, include, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject := options.Project\n\tif project == nil {\n\t\tproject, err = s.getProjectWithResources(ctx, containers, projectName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Check requested services exists in model\n\tservices, err := checkSelectedServices(options, 
project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif len(options.Services) > 0 && len(services) == 0 {\n\t\tlogrus.Infof(\"None of the services %v are running in project %q\", options.Services, projectName)\n\t\treturn nil\n\t}\n\n\toptions.Services = services\n\n\tif len(containers) > 0 {\n\t\tresourceToRemove = true\n\t}\n\n\terr = InReverseDependencyOrder(ctx, project, func(c context.Context, service string) error {\n\t\tserv := project.Services[service]\n\t\tif serv.Provider != nil {\n\t\t\treturn s.runPlugin(ctx, project, serv, \"down\")\n\t\t}\n\t\tserviceContainers := containers.filter(isService(service))\n\t\terr := s.removeContainers(ctx, serviceContainers, &serv, options.Timeout, options.Volumes)\n\t\treturn err\n\t}, WithRootNodesAndDown(options.Services))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\torphans := containers.filter(isOrphaned(project))\n\tif options.RemoveOrphans && len(orphans) > 0 {\n\t\terr := s.removeContainers(ctx, orphans, nil, options.Timeout, false)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tops := s.ensureNetworksDown(ctx, project)\n\n\tif options.Images != \"\" {\n\t\timgOps, err := s.ensureImagesDown(ctx, project, options)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tops = append(ops, imgOps...)\n\t}\n\n\tif options.Volumes {\n\t\tops = append(ops, s.ensureVolumesDown(ctx, project)...)\n\t}\n\n\tif !resourceToRemove && len(ops) == 0 {\n\t\tlogrus.Warnf(\"Warning: No resource found to remove for project %q.\", projectName)\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, op := range ops {\n\t\teg.Go(op)\n\t}\n\treturn eg.Wait()\n}\n\nfunc checkSelectedServices(options api.DownOptions, project *types.Project) ([]string, error) {\n\tvar services []string\n\tfor _, service := range options.Services {\n\t\t_, err := project.GetService(service)\n\t\tif err != nil {\n\t\t\tif options.Project != nil {\n\t\t\t\t// ran with an explicit compose.yaml file, so we should not ignore\n\t\t\t\treturn nil, 
err\n\t\t\t}\n\t\t\t// ran without an explicit compose.yaml file, so can't distinguish typo vs container already removed\n\t\t} else {\n\t\t\tservices = append(services, service)\n\t\t}\n\t}\n\treturn services, nil\n}\n\nfunc (s *composeService) ensureVolumesDown(ctx context.Context, project *types.Project) []downOp {\n\tvar ops []downOp\n\tfor _, vol := range project.Volumes {\n\t\tif vol.External {\n\t\t\tcontinue\n\t\t}\n\t\tvolumeName := vol.Name\n\t\tops = append(ops, func() error {\n\t\t\treturn s.removeVolume(ctx, volumeName)\n\t\t})\n\t}\n\n\treturn ops\n}\n\nfunc (s *composeService) ensureImagesDown(ctx context.Context, project *types.Project, options api.DownOptions) ([]downOp, error) {\n\timagePruner := NewImagePruner(s.apiClient(), project)\n\tpruneOpts := ImagePruneOptions{\n\t\tMode:          ImagePruneMode(options.Images),\n\t\tRemoveOrphans: options.RemoveOrphans,\n\t}\n\timages, err := imagePruner.ImagesToPrune(ctx, pruneOpts)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar ops []downOp\n\tfor i := range images {\n\t\timg := images[i]\n\t\tops = append(ops, func() error {\n\t\t\treturn s.removeImage(ctx, img)\n\t\t})\n\t}\n\treturn ops, nil\n}\n\nfunc (s *composeService) ensureNetworksDown(ctx context.Context, project *types.Project) []downOp {\n\tvar ops []downOp\n\tfor key, n := range project.Networks {\n\t\tif n.External {\n\t\t\tcontinue\n\t\t}\n\t\t// loop capture variable for op closure\n\t\tnetworkKey := key\n\t\tidOrName := n.Name\n\t\tops = append(ops, func() error {\n\t\t\treturn s.removeNetwork(ctx, networkKey, project.Name, idOrName)\n\t\t})\n\t}\n\treturn ops\n}\n\nfunc (s *composeService) removeNetwork(ctx context.Context, composeNetworkName string, projectName string, name string) error {\n\tres, err := s.apiClient().NetworkList(ctx, client.NetworkListOptions{\n\t\tFilters: projectFilter(projectName).Add(\"label\", networkFilter(composeNetworkName)),\n\t})\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to list networks: 
%w\", err)\n\t}\n\tnetworks := res.Items\n\n\tif len(networks) == 0 {\n\t\treturn nil\n\t}\n\n\teventName := fmt.Sprintf(\"Network %s\", name)\n\ts.events.On(removingEvent(eventName))\n\n\tvar found int\n\tfor _, net := range networks {\n\t\tif net.Name != name {\n\t\t\tcontinue\n\t\t}\n\t\tnwInspect, err := s.apiClient().NetworkInspect(ctx, net.ID, client.NetworkInspectOptions{})\n\t\tif errdefs.IsNotFound(err) {\n\t\t\ts.events.On(newEvent(eventName, api.Warning, \"No resource found to remove\"))\n\t\t\treturn nil\n\t\t}\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tnw := nwInspect.Network\n\t\tif len(nw.Containers) > 0 {\n\t\t\ts.events.On(newEvent(eventName, api.Warning, \"Resource is still in use\"))\n\t\t\tfound++\n\t\t\tcontinue\n\t\t}\n\n\t\tif _, err := s.apiClient().NetworkRemove(ctx, net.ID, client.NetworkRemoveOptions{}); err != nil {\n\t\t\tif errdefs.IsNotFound(err) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\ts.events.On(errorEvent(eventName, err.Error()))\n\t\t\treturn fmt.Errorf(\"failed to remove network %s: %w\", name, err)\n\t\t}\n\t\ts.events.On(removedEvent(eventName))\n\t\tfound++\n\t}\n\n\tif found == 0 {\n\t\t// in practice, it's extremely unlikely for this to ever occur, as it'd\n\t\t// mean the network was present when we queried at the start of this\n\t\t// method but was then deleted by something else in the interim\n\t\ts.events.On(newEvent(eventName, api.Warning, \"No resource found to remove\"))\n\t\treturn nil\n\t}\n\treturn nil\n}\n\nfunc (s *composeService) removeImage(ctx context.Context, image string) error {\n\tid := fmt.Sprintf(\"Image %s\", image)\n\ts.events.On(newEvent(id, api.Working, \"Removing\"))\n\t_, err := s.apiClient().ImageRemove(ctx, image, client.ImageRemoveOptions{})\n\tif err == nil {\n\t\ts.events.On(newEvent(id, api.Done, \"Removed\"))\n\t\treturn nil\n\t}\n\tif errdefs.IsConflict(err) {\n\t\ts.events.On(newEvent(id, api.Warning, \"Resource is still in use\"))\n\t\treturn nil\n\t}\n\tif errdefs.IsNotFound(err) 
{\n\t\ts.events.On(newEvent(id, api.Done, \"Warning: No resource found to remove\"))\n\t\treturn nil\n\t}\n\treturn err\n}\n\nfunc (s *composeService) removeVolume(ctx context.Context, id string) error {\n\tresource := fmt.Sprintf(\"Volume %s\", id)\n\n\t_, err := s.apiClient().VolumeInspect(ctx, id, client.VolumeInspectOptions{})\n\tif errdefs.IsNotFound(err) {\n\t\t// Already gone\n\t\treturn nil\n\t}\n\n\ts.events.On(newEvent(resource, api.Working, \"Removing\"))\n\t_, err = s.apiClient().VolumeRemove(ctx, id, client.VolumeRemoveOptions{\n\t\tForce: true,\n\t})\n\tif err == nil {\n\t\ts.events.On(newEvent(resource, api.Done, \"Removed\"))\n\t\treturn nil\n\t}\n\tif errdefs.IsConflict(err) {\n\t\ts.events.On(newEvent(resource, api.Warning, \"Resource is still in use\"))\n\t\treturn nil\n\t}\n\tif errdefs.IsNotFound(err) {\n\t\ts.events.On(newEvent(resource, api.Done, \"Warning: No resource found to remove\"))\n\t\treturn nil\n\t}\n\treturn err\n}\n\nfunc (s *composeService) stopContainer(ctx context.Context, service *types.ServiceConfig, ctr containerType.Summary, timeout *time.Duration, listener api.ContainerEventListener) error {\n\teventName := getContainerProgressName(ctr)\n\ts.events.On(stoppingEvent(eventName))\n\n\tif service != nil {\n\t\tfor _, hook := range service.PreStop {\n\t\t\terr := s.runHook(ctx, ctr, *service, hook, listener)\n\t\t\tif err != nil {\n\t\t\t\t// Ignore errors indicating that some containers were already stopped or removed.\n\t\t\t\tif errdefs.IsNotFound(err) || errdefs.IsConflict(err) {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\n\t_, err := s.apiClient().ContainerStop(ctx, ctr.ID, client.ContainerStopOptions{\n\t\tTimeout: utils.DurationSecondToInt(timeout),\n\t})\n\tif err != nil {\n\t\ts.events.On(errorEvent(eventName, \"Error while Stopping\"))\n\t\treturn err\n\t}\n\ts.events.On(stoppedEvent(eventName))\n\treturn nil\n}\n\nfunc (s *composeService) stopContainers(ctx context.Context, serv 
*types.ServiceConfig, containers []containerType.Summary, timeout *time.Duration, listener api.ContainerEventListener) error {\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, ctr := range containers {\n\t\teg.Go(func() error {\n\t\t\treturn s.stopContainer(ctx, serv, ctr, timeout, listener)\n\t\t})\n\t}\n\treturn eg.Wait()\n}\n\nfunc (s *composeService) removeContainers(ctx context.Context, containers []containerType.Summary, service *types.ServiceConfig, timeout *time.Duration, volumes bool) error {\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, ctr := range containers {\n\t\teg.Go(func() error {\n\t\t\treturn s.stopAndRemoveContainer(ctx, ctr, service, timeout, volumes)\n\t\t})\n\t}\n\treturn eg.Wait()\n}\n\nfunc (s *composeService) stopAndRemoveContainer(ctx context.Context, ctr containerType.Summary, service *types.ServiceConfig, timeout *time.Duration, volumes bool) error {\n\teventName := getContainerProgressName(ctr)\n\terr := s.stopContainer(ctx, service, ctr, timeout, nil)\n\tif errdefs.IsNotFound(err) {\n\t\ts.events.On(removedEvent(eventName))\n\t\treturn nil\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\ts.events.On(removingEvent(eventName))\n\t_, err = s.apiClient().ContainerRemove(ctx, ctr.ID, client.ContainerRemoveOptions{\n\t\tForce:         true,\n\t\tRemoveVolumes: volumes,\n\t})\n\tif err != nil && !errdefs.IsNotFound(err) && !errdefs.IsConflict(err) {\n\t\ts.events.On(errorEvent(eventName, \"Error while Removing\"))\n\t\treturn err\n\t}\n\ts.events.On(removedEvent(eventName))\n\treturn nil\n}\n\nfunc (s *composeService) getProjectWithResources(ctx context.Context, containers Containers, projectName string) (*types.Project, error) {\n\tcontainers = containers.filter(isNotOneOff)\n\tp, err := s.projectFromName(containers, projectName)\n\tif err != nil && !api.IsNotFoundError(err) {\n\t\treturn nil, err\n\t}\n\tproject, err := p.WithServicesTransform(func(name string, service types.ServiceConfig) (types.ServiceConfig, error) {\n\t\tfor k := 
range service.DependsOn {\n\t\t\t// the key is guaranteed to exist while ranging over the map, so no\n\t\t\t// existence check is needed before updating the entry\n\t\t\tdependency := service.DependsOn[k]\n\t\t\tdependency.Required = false\n\t\t\tservice.DependsOn[k] = dependency\n\t\t}\n\t\treturn service, nil\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvolumes, err := s.actualVolumes(ctx, projectName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tproject.Volumes = volumes\n\n\tnetworks, err := s.actualNetworks(ctx, projectName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tproject.Networks = networks\n\n\treturn project, nil\n}\n"
  },
  {
    "path": "pkg/compose/down_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/docker/cli/cli/streams\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/api/types/volume\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\nfunc TestDown(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(false)).Return(\n\t\tclient.ContainerListResult{Items: []container.Summary{\n\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\ttestContainer(\"service2\", \"456\", false),\n\t\t\ttestContainer(\"service2\", \"789\", false),\n\t\t\ttestContainer(\"service_orphan\", \"321\", true),\n\t\t}}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, 
nil)\n\n\t// network names are not guaranteed to be unique, ensure Compose handles\n\t// cleanup properly if duplicates are inadvertently created\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{Items: []network.Summary{\n\t\t\t{Network: network.Network{ID: \"abc123\", Name: \"myProject_default\", Labels: map[string]string{compose.NetworkLabel: \"default\"}}},\n\t\t\t{Network: network.Network{ID: \"def456\", Name: \"myProject_default\", Labels: map[string]string{compose.NetworkLabel: \"default\"}}},\n\t\t}}, nil)\n\n\tstopOptions := client.ContainerStopOptions{}\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"123\", stopOptions).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"456\", stopOptions).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"789\", stopOptions).Return(client.ContainerStopResult{}, nil)\n\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"123\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"456\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"789\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{\n\t\tFilters: projectFilter(strings.ToLower(testProject)).Add(\"label\", networkFilter(\"default\")),\n\t}).Return(client.NetworkListResult{Items: []network.Summary{\n\t\t{Network: network.Network{ID: \"abc123\", Name: \"myProject_default\"}},\n\t\t{Network: network.Network{ID: \"def456\", Name: \"myProject_default\"}},\n\t}}, nil)\n\tapi.EXPECT().NetworkInspect(gomock.Any(), \"abc123\", gomock.Any()).Return(client.NetworkInspectResult{\n\t\tNetwork: network.Inspect{Network: 
network.Network{ID: \"abc123\"}},\n\t}, nil)\n\tapi.EXPECT().NetworkInspect(gomock.Any(), \"def456\", gomock.Any()).Return(client.NetworkInspectResult{\n\t\tNetwork: network.Inspect{Network: network.Network{ID: \"def456\"}},\n\t}, nil)\n\tapi.EXPECT().NetworkRemove(gomock.Any(), \"abc123\", gomock.Any()).Return(client.NetworkRemoveResult{}, nil)\n\tapi.EXPECT().NetworkRemove(gomock.Any(), \"def456\", gomock.Any()).Return(client.NetworkRemoveResult{}, nil)\n\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), compose.DownOptions{})\n\tassert.NilError(t, err)\n}\n\nfunc TestDownWithGivenServices(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(false)).Return(client.ContainerListResult{\n\t\tItems: []container.Summary{\n\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\ttestContainer(\"service2\", \"456\", false),\n\t\t\ttestContainer(\"service2\", \"789\", false),\n\t\t\ttestContainer(\"service_orphan\", \"321\", true),\n\t\t},\n\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, nil)\n\n\t// network names are not guaranteed to be unique, ensure Compose handles\n\t// cleanup properly if duplicates are inadvertently created\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{Items: []network.Summary{\n\t\t\t{Network: network.Network{ID: \"abc123\", Name: \"myProject_default\", Labels: map[string]string{compose.NetworkLabel: \"default\"}}},\n\t\t\t{Network: network.Network{ID: \"def456\", Name: \"myProject_default\", Labels: map[string]string{compose.NetworkLabel: \"default\"}}},\n\t\t}}, 
nil)\n\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"123\", client.ContainerStopOptions{}).Return(client.ContainerStopResult{}, nil)\n\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"123\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{\n\t\tFilters: projectFilter(strings.ToLower(testProject)).Add(\"label\", networkFilter(\"default\")),\n\t}).Return(client.NetworkListResult{Items: []network.Summary{\n\t\t{Network: network.Network{ID: \"abc123\", Name: \"myProject_default\"}},\n\t}}, nil)\n\tapi.EXPECT().NetworkInspect(gomock.Any(), \"abc123\", gomock.Any()).Return(client.NetworkInspectResult{Network: network.Inspect{Network: network.Network{ID: \"abc123\"}}}, nil)\n\tapi.EXPECT().NetworkRemove(gomock.Any(), \"abc123\", gomock.Any()).Return(client.NetworkRemoveResult{}, nil)\n\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), compose.DownOptions{\n\t\tServices: []string{\"service1\", \"not-running-service\"},\n\t})\n\tassert.NilError(t, err)\n}\n\nfunc TestDownWithSpecifiedServiceButTheServicesAreNotRunning(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(false)).Return(client.ContainerListResult{\n\t\tItems: []container.Summary{\n\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\ttestContainer(\"service2\", \"456\", false),\n\t\t\ttestContainer(\"service2\", \"789\", false),\n\t\t\ttestContainer(\"service_orphan\", \"321\", true),\n\t\t},\n\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, nil)\n\n\t// network names are not guaranteed to be unique, ensure Compose handles\n\t// cleanup 
properly if duplicates are inadvertently created\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{Items: []network.Summary{\n\t\t\t{Network: network.Network{ID: \"abc123\", Name: \"myProject_default\", Labels: map[string]string{compose.NetworkLabel: \"default\"}}},\n\t\t\t{Network: network.Network{ID: \"def456\", Name: \"myProject_default\", Labels: map[string]string{compose.NetworkLabel: \"default\"}}},\n\t\t}}, nil)\n\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), compose.DownOptions{\n\t\tServices: []string{\"not-running-service1\", \"not-running-service2\"},\n\t})\n\tassert.NilError(t, err)\n}\n\nfunc TestDownRemoveOrphans(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(true)).Return(\n\t\tclient.ContainerListResult{\n\t\t\tItems: []container.Summary{\n\t\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\t\ttestContainer(\"service2\", \"789\", false),\n\t\t\t\ttestContainer(\"service_orphan\", \"321\", true),\n\t\t\t},\n\t\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, nil)\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{\n\t\t\tItems: []network.Summary{{\n\t\t\t\tNetwork: network.Network{\n\t\t\t\t\tName:   \"myProject_default\",\n\t\t\t\t\tLabels: map[string]string{compose.NetworkLabel: \"default\"},\n\t\t\t\t},\n\t\t\t}},\n\t\t}, nil)\n\n\tstopOptions := client.ContainerStopOptions{}\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"123\", 
stopOptions).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"789\", stopOptions).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"321\", stopOptions).Return(client.ContainerStopResult{}, nil)\n\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"123\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"789\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"321\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{\n\t\tFilters: projectFilter(strings.ToLower(testProject)).Add(\"label\", networkFilter(\"default\")),\n\t}).Return(client.NetworkListResult{\n\t\tItems: []network.Summary{{Network: network.Network{ID: \"abc123\", Name: \"myProject_default\"}}},\n\t}, nil)\n\tapi.EXPECT().NetworkInspect(gomock.Any(), \"abc123\", gomock.Any()).Return(client.NetworkInspectResult{\n\t\tNetwork: network.Inspect{Network: network.Network{ID: \"abc123\"}},\n\t}, nil)\n\tapi.EXPECT().NetworkRemove(gomock.Any(), \"abc123\", gomock.Any()).Return(client.NetworkRemoveResult{}, nil)\n\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), compose.DownOptions{RemoveOrphans: true})\n\tassert.NilError(t, err)\n}\n\nfunc TestDownRemoveVolumes(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(false)).Return(\n\t\tclient.ContainerListResult{\n\t\t\tItems: []container.Summary{testContainer(\"service1\", \"123\", false)},\n\t\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: 
projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{\n\t\t\tItems: []volume.Volume{{Name: \"myProject_volume\"}},\n\t\t}, nil)\n\tapi.EXPECT().VolumeInspect(gomock.Any(), \"myProject_volume\", gomock.Any()).\n\t\tReturn(client.VolumeInspectResult{}, nil)\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{}, nil)\n\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"123\", client.ContainerStopOptions{}).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"123\", client.ContainerRemoveOptions{Force: true, RemoveVolumes: true}).Return(client.ContainerRemoveResult{}, nil)\n\n\tapi.EXPECT().VolumeRemove(gomock.Any(), \"myProject_volume\", client.VolumeRemoveOptions{Force: true}).Return(client.VolumeRemoveResult{}, nil)\n\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), compose.DownOptions{Volumes: true})\n\tassert.NilError(t, err)\n}\n\nfunc TestDownRemoveImages(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\topts := compose.DownOptions{\n\t\tProject: &types.Project{\n\t\t\tName: strings.ToLower(testProject),\n\t\t\tServices: types.Services{\n\t\t\t\t\"local-anonymous\":     {Name: \"local-anonymous\"},\n\t\t\t\t\"local-named\":         {Name: \"local-named\", Image: \"local-named-image\"},\n\t\t\t\t\"remote\":              {Name: \"remote\", Image: \"remote-image\"},\n\t\t\t\t\"remote-tagged\":       {Name: \"remote-tagged\", Image: \"registry.example.com/remote-image-tagged:v1.0\"},\n\t\t\t\t\"no-images-anonymous\": {Name: \"no-images-anonymous\"},\n\t\t\t\t\"no-images-named\":     {Name: \"no-images-named\", Image: \"missing-named-image\"},\n\t\t\t},\n\t\t},\n\t}\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), 
projectFilterListOpt(false)).\n\t\tReturn(client.ContainerListResult{\n\t\t\tItems: []container.Summary{\n\t\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\t},\n\t\t}, nil).\n\t\tAnyTimes()\n\n\tapi.EXPECT().ImageList(gomock.Any(), client.ImageListOptions{\n\t\tFilters: projectFilter(strings.ToLower(testProject)).Add(\"dangling\", \"false\"),\n\t}).Return(client.ImageListResult{Items: []image.Summary{\n\t\t{\n\t\t\tLabels:   types.Labels{compose.ServiceLabel: \"local-anonymous\"},\n\t\t\tRepoTags: []string{\"testproject-local-anonymous:latest\"},\n\t\t},\n\t\t{\n\t\t\tLabels:   types.Labels{compose.ServiceLabel: \"local-named\"},\n\t\t\tRepoTags: []string{\"local-named-image:latest\"},\n\t\t},\n\t}}, nil).AnyTimes()\n\n\timagesToBeInspected := map[string]bool{\n\t\t\"testproject-local-anonymous\":     true,\n\t\t\"local-named-image\":               true,\n\t\t\"remote-image\":                    true,\n\t\t\"testproject-no-images-anonymous\": false,\n\t\t\"missing-named-image\":             false,\n\t}\n\tfor img, exists := range imagesToBeInspected {\n\t\tvar resp image.InspectResponse\n\t\tvar err error\n\t\tif exists {\n\t\t\tresp.RepoTags = []string{img}\n\t\t} else {\n\t\t\terr = errdefs.ErrNotFound.WithMessage(fmt.Sprintf(\"test specified that image %q should not exist\", img))\n\t\t}\n\n\t\tapi.EXPECT().ImageInspect(gomock.Any(), img).\n\t\t\tReturn(client.ImageInspectResult{InspectResponse: resp}, err).\n\t\t\tAnyTimes()\n\t}\n\n\tapi.EXPECT().ImageInspect(gomock.Any(), \"registry.example.com/remote-image-tagged:v1.0\").\n\t\tReturn(client.ImageInspectResult{InspectResponse: image.InspectResponse{RepoTags: []string{\"registry.example.com/remote-image-tagged:v1.0\"}}}, nil).\n\t\tAnyTimes()\n\n\tlocalImagesToBeRemoved := []string{\n\t\t\"testproject-local-anonymous:latest\",\n\t\t\"local-named-image:latest\",\n\t}\n\tfor _, img := range localImagesToBeRemoved {\n\t\t// test calls down --rmi=local then down --rmi=all, so local images\n\t\t// get 
\"removed\" 2x, while other images are only 1x\n\t\tapi.EXPECT().ImageRemove(gomock.Any(), img, client.ImageRemoveOptions{}).\n\t\t\tReturn(client.ImageRemoveResult{}, nil).\n\t\t\tTimes(2)\n\t}\n\n\tt.Log(\"-> docker compose down --rmi=local\")\n\topts.Images = \"local\"\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), opts)\n\tassert.NilError(t, err)\n\n\totherImagesToBeRemoved := []string{\n\t\t\"remote-image:latest\",\n\t\t\"registry.example.com/remote-image-tagged:v1.0\",\n\t}\n\tfor _, img := range otherImagesToBeRemoved {\n\t\tapi.EXPECT().ImageRemove(gomock.Any(), img, client.ImageRemoveOptions{}).\n\t\t\tReturn(client.ImageRemoveResult{}, nil).\n\t\t\tTimes(1)\n\t}\n\n\tt.Log(\"-> docker compose down --rmi=all\")\n\topts.Images = \"all\"\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), opts)\n\tassert.NilError(t, err)\n}\n\nfunc TestDownRemoveImages_NoLabel(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tctr := testContainer(\"service1\", \"123\", false)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(false)).Return(\n\t\tclient.ContainerListResult{\n\t\t\tItems: []container.Summary{ctr},\n\t\t}, nil)\n\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{\n\t\t\tItems: []volume.Volume{{Name: \"myProject_volume\"}},\n\t\t}, nil)\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{}, nil)\n\n\t// ImageList returns no images for the project since they were unlabeled\n\t// (created by an older version of Compose)\n\tapi.EXPECT().ImageList(gomock.Any(), client.ImageListOptions{\n\t\tFilters: 
projectFilter(strings.ToLower(testProject)).Add(\"dangling\", \"false\"),\n\t}).Return(client.ImageListResult{}, nil)\n\n\tapi.EXPECT().ImageInspect(gomock.Any(), \"testproject-service1\", gomock.Any()).Return(client.ImageInspectResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"123\", client.ContainerStopOptions{}).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerRemove(gomock.Any(), \"123\", client.ContainerRemoveOptions{Force: true}).Return(client.ContainerRemoveResult{}, nil)\n\n\tapi.EXPECT().ImageRemove(gomock.Any(), \"testproject-service1:latest\", client.ImageRemoveOptions{}).Return(client.ImageRemoveResult{}, nil)\n\n\terr = tested.Down(t.Context(), strings.ToLower(testProject), compose.DownOptions{Images: \"local\"})\n\tassert.NilError(t, err)\n}\n\nfunc prepareMocks(mockCtrl *gomock.Controller) (*mocks.MockAPIClient, *mocks.MockCli) {\n\tapi := mocks.NewMockAPIClient(mockCtrl)\n\tcli := mocks.NewMockCli(mockCtrl)\n\tcli.EXPECT().Client().Return(api).AnyTimes()\n\tcli.EXPECT().Err().Return(streams.NewOut(os.Stderr)).AnyTimes()\n\tcli.EXPECT().Out().Return(streams.NewOut(os.Stdout)).AnyTimes()\n\treturn api, cli\n}\n"
  },
  {
    "path": "pkg/compose/envresolver.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"runtime\"\n\t\"strings\"\n)\n\n// isCaseInsensitiveEnvVars is true on platforms where environment variable names are treated case-insensitively.\nvar isCaseInsensitiveEnvVars = (runtime.GOOS == \"windows\")\n\n// envResolver returns resolver for environment variables suitable for the current platform.\n// Expected to be used with `MappingWithEquals.Resolve`.\n// Updates in `environment` may not be reflected.\nfunc envResolver(environment map[string]string) func(string) (string, bool) {\n\treturn envResolverWithCase(environment, isCaseInsensitiveEnvVars)\n}\n\n// envResolverWithCase returns resolver for environment variables with the specified case-sensitive condition.\n// Expected to be used with `MappingWithEquals.Resolve`.\n// Updates in `environment` may not be reflected.\nfunc envResolverWithCase(environment map[string]string, caseInsensitive bool) func(string) (string, bool) {\n\tif environment == nil {\n\t\treturn func(s string) (string, bool) {\n\t\t\treturn \"\", false\n\t\t}\n\t}\n\tif !caseInsensitive {\n\t\treturn func(s string) (string, bool) {\n\t\t\tv, ok := environment[s]\n\t\t\treturn v, ok\n\t\t}\n\t}\n\t// variable names must be treated case-insensitively.\n\t// Resolves in this way:\n\t// * Return the value if its name matches with the passed name case-sensitively.\n\t// * Otherwise, 
return the value if its lower-cased name matches the lower-cased passed name.\n\t//     * The value is undefined if multiple variables match.\n\tloweredEnvironment := make(map[string]string, len(environment))\n\tfor k, v := range environment {\n\t\tloweredEnvironment[strings.ToLower(k)] = v\n\t}\n\treturn func(s string) (string, bool) {\n\t\tv, ok := environment[s]\n\t\tif ok {\n\t\t\treturn v, ok\n\t\t}\n\t\tv, ok = loweredEnvironment[strings.ToLower(s)]\n\t\treturn v, ok\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/envresolver_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc Test_EnvResolverWithCase(t *testing.T) {\n\ttests := []struct {\n\t\tname            string\n\t\tenvironment     map[string]string\n\t\tcaseInsensitive bool\n\t\tsearch          string\n\t\texpectedValue   string\n\t\texpectedOk      bool\n\t}{\n\t\t{\n\t\t\tname: \"case sensitive/case match\",\n\t\t\tenvironment: map[string]string{\n\t\t\t\t\"Env1\": \"Value1\",\n\t\t\t\t\"Env2\": \"Value2\",\n\t\t\t},\n\t\t\tcaseInsensitive: false,\n\t\t\tsearch:          \"Env1\",\n\t\t\texpectedValue:   \"Value1\",\n\t\t\texpectedOk:      true,\n\t\t},\n\t\t{\n\t\t\tname: \"case sensitive/case unmatch\",\n\t\t\tenvironment: map[string]string{\n\t\t\t\t\"Env1\": \"Value1\",\n\t\t\t\t\"Env2\": \"Value2\",\n\t\t\t},\n\t\t\tcaseInsensitive: false,\n\t\t\tsearch:          \"ENV1\",\n\t\t\texpectedValue:   \"\",\n\t\t\texpectedOk:      false,\n\t\t},\n\t\t{\n\t\t\tname:            \"case sensitive/nil environment\",\n\t\t\tenvironment:     nil,\n\t\t\tcaseInsensitive: false,\n\t\t\tsearch:          \"Env1\",\n\t\t\texpectedValue:   \"\",\n\t\t\texpectedOk:      false,\n\t\t},\n\t\t{\n\t\t\tname: \"case insensitive/case match\",\n\t\t\tenvironment: map[string]string{\n\t\t\t\t\"Env1\": \"Value1\",\n\t\t\t\t\"Env2\": \"Value2\",\n\t\t\t},\n\t\t\tcaseInsensitive: 
true,\n\t\t\tsearch:          \"Env1\",\n\t\t\texpectedValue:   \"Value1\",\n\t\t\texpectedOk:      true,\n\t\t},\n\t\t{\n\t\t\tname: \"case insensitive/case unmatch\",\n\t\t\tenvironment: map[string]string{\n\t\t\t\t\"Env1\": \"Value1\",\n\t\t\t\t\"Env2\": \"Value2\",\n\t\t\t},\n\t\t\tcaseInsensitive: true,\n\t\t\tsearch:          \"ENV1\",\n\t\t\texpectedValue:   \"Value1\",\n\t\t\texpectedOk:      true,\n\t\t},\n\t\t{\n\t\t\tname: \"case insensitive/unmatch\",\n\t\t\tenvironment: map[string]string{\n\t\t\t\t\"Env1\": \"Value1\",\n\t\t\t\t\"Env2\": \"Value2\",\n\t\t\t},\n\t\t\tcaseInsensitive: true,\n\t\t\tsearch:          \"Env3\",\n\t\t\texpectedValue:   \"\",\n\t\t\texpectedOk:      false,\n\t\t},\n\t\t{\n\t\t\tname:            \"case insensitive/nil environment\",\n\t\t\tenvironment:     nil,\n\t\t\tcaseInsensitive: true,\n\t\t\tsearch:          \"Env1\",\n\t\t\texpectedValue:   \"\",\n\t\t\texpectedOk:      false,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tf := envResolverWithCase(test.environment, test.caseInsensitive)\n\t\t\tv, ok := f(test.search)\n\t\t\tassert.Equal(t, v, test.expectedValue)\n\t\t\tassert.Equal(t, ok, test.expectedOk)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/events.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"slices\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Events(ctx context.Context, projectName string, options api.EventsOptions) error {\n\tprojectName = strings.ToLower(projectName)\n\tres := s.apiClient().Events(ctx, client.EventsListOptions{\n\t\tFilters: projectFilter(projectName),\n\t\tSince:   options.Since,\n\t\tUntil:   options.Until,\n\t})\n\tfor {\n\t\tselect {\n\t\tcase event := <-res.Messages:\n\t\t\t// TODO: support other event types\n\t\t\tif event.Type != \"container\" {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif event.Actor.Attributes[api.OneoffLabel] == \"True\" {\n\t\t\t\t// ignore\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tservice := event.Actor.Attributes[api.ServiceLabel]\n\t\t\tif len(options.Services) > 0 && !slices.Contains(options.Services, service) {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tattributes := map[string]string{}\n\t\t\tfor k, v := range event.Actor.Attributes {\n\t\t\t\tif strings.HasPrefix(k, \"com.docker.compose.\") {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tattributes[k] = v\n\t\t\t}\n\n\t\t\ttimestamp := time.Unix(event.Time, 0)\n\t\t\tif event.TimeNano != 0 {\n\t\t\t\ttimestamp = time.Unix(0, event.TimeNano)\n\t\t\t}\n\t\t\terr := 
options.Consumer(api.Event{\n\t\t\t\tTimestamp:  timestamp,\n\t\t\t\tService:    service,\n\t\t\t\tContainer:  event.Actor.ID,\n\t\t\t\tStatus:     string(event.Action),\n\t\t\t\tAttributes: attributes,\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\tcase err := <-res.Err:\n\t\t\treturn err\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/exec.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/docker/cli/cli/command/container\"\n\tcontainerType \"github.com/moby/moby/api/types/container\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Exec(ctx context.Context, projectName string, options api.RunOptions) (int, error) {\n\tprojectName = strings.ToLower(projectName)\n\ttarget, err := s.getExecTarget(ctx, projectName, options)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\texec := container.NewExecOptions()\n\texec.Interactive = options.Interactive\n\texec.TTY = options.Tty\n\texec.Detach = options.Detach\n\texec.User = options.User\n\texec.Privileged = options.Privileged\n\texec.Workdir = options.WorkingDir\n\texec.Command = options.Command\n\tfor _, v := range options.Environment {\n\t\terr := exec.Env.Set(v)\n\t\tif err != nil {\n\t\t\treturn 0, err\n\t\t}\n\t}\n\n\terr = container.RunExec(ctx, s.dockerCli, target.ID, exec)\n\tvar sterr cli.StatusError\n\tif errors.As(err, &sterr) {\n\t\treturn sterr.StatusCode, err\n\t}\n\treturn 0, err\n}\n\nfunc (s *composeService) getExecTarget(ctx context.Context, projectName string, opts api.RunOptions) (containerType.Summary, error) {\n\treturn s.getSpecifiedContainer(ctx, projectName, oneOffInclude, false, opts.Service, 
opts.Index)\n}\n"
  },
  {
    "path": "pkg/compose/export.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/moby/sys/atomicwriter\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Export(ctx context.Context, projectName string, options api.ExportOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.export(ctx, projectName, options)\n\t}, \"export\", s.events)\n}\n\nfunc (s *composeService) export(ctx context.Context, projectName string, options api.ExportOptions) error {\n\tprojectName = strings.ToLower(projectName)\n\n\tcontainer, err := s.getSpecifiedContainer(ctx, projectName, oneOffInclude, false, options.Service, options.Index)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif options.Output == \"\" {\n\t\tif s.stdout().IsTerminal() {\n\t\t\treturn fmt.Errorf(\"output option is required when exporting to terminal\")\n\t\t}\n\t} else if err := command.ValidateOutputPath(options.Output); err != nil {\n\t\treturn fmt.Errorf(\"failed to export container: %w\", err)\n\t}\n\n\tname := getCanonicalContainerName(container)\n\ts.events.On(api.Resource{\n\t\tID:     name,\n\t\tText:   api.StatusExporting,\n\t\tStatus: api.Working,\n\t})\n\n\tresponseBody, err := s.apiClient().ContainerExport(ctx, container.ID, 
client.ContainerExportOptions{})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdefer func() {\n\t\tif err := responseBody.Close(); err != nil {\n\t\t\ts.events.On(errorEventf(name, \"Failed to close response body: %s\", err.Error()))\n\t\t}\n\t}()\n\n\tif !s.dryRun {\n\t\tif options.Output == \"\" {\n\t\t\t_, err := io.Copy(s.stdout(), responseBody)\n\t\t\treturn err\n\t\t} else {\n\t\t\twriter, err := atomicwriter.New(options.Output, 0o600)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdefer func() { _ = writer.Close() }()\n\n\t\t\t_, err = io.Copy(writer, responseBody)\n\t\t\treturn err\n\t\t}\n\t}\n\n\ts.events.On(api.Resource{\n\t\tID:     name,\n\t\tText:   api.StatusExported,\n\t\tStatus: api.Done,\n\t})\n\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/compose/filters.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc projectFilter(projectName string) client.Filters {\n\treturn make(client.Filters).Add(\"label\", fmt.Sprintf(\"%s=%s\", api.ProjectLabel, projectName))\n}\n\nfunc serviceFilter(serviceName string) string {\n\treturn fmt.Sprintf(\"%s=%s\", api.ServiceLabel, serviceName)\n}\n\nfunc networkFilter(name string) string {\n\treturn fmt.Sprintf(\"%s=%s\", api.NetworkLabel, name)\n}\n\nfunc oneOffFilter(b bool) string {\n\tv := \"False\"\n\tif b {\n\t\tv = \"True\"\n\t}\n\treturn fmt.Sprintf(\"%s=%s\", api.OneoffLabel, v)\n}\n\nfunc containerNumberFilter(index int) string {\n\treturn fmt.Sprintf(\"%s=%d\", api.ContainerNumberLabel, index)\n}\n\nfunc hasConfigHashLabel() string {\n\treturn api.ConfigHashLabel\n}\n"
  },
  {
    "path": "pkg/compose/generate.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"maps\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/mount\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Generate(ctx context.Context, options api.GenerateOptions) (*types.Project, error) {\n\tres, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: make(client.Filters).Add(\"name\", options.Containers...),\n\t\tAll:     true,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcontainers := res.Items\n\n\tcontainersByIds, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: make(client.Filters).Add(\"id\", options.Containers...),\n\t\tAll:     true,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor _, ctr := range containersByIds.Items {\n\t\tif !slices.ContainsFunc(containers, func(summary container.Summary) bool {\n\t\t\treturn summary.ID == ctr.ID\n\t\t}) {\n\t\t\tcontainers = append(containers, ctr)\n\t\t}\n\t}\n\n\tif len(containers) == 0 {\n\t\treturn nil, fmt.Errorf(\"no container(s) found with the following name(s): %s\", strings.Join(options.Containers, 
\",\"))\n\t}\n\n\treturn s.createProjectFromContainers(containers, options.ProjectName)\n}\n\nfunc (s *composeService) createProjectFromContainers(containers []container.Summary, projectName string) (*types.Project, error) {\n\tproject := &types.Project{}\n\tservices := types.Services{}\n\tnetworks := types.Networks{}\n\tvolumes := types.Volumes{}\n\tsecrets := types.Secrets{}\n\n\tif projectName != \"\" {\n\t\tproject.Name = projectName\n\t}\n\n\tfor _, c := range containers {\n\t\t// if the container is from a previous Compose application, use the existing service name\n\t\tserviceLabel, ok := c.Labels[api.ServiceLabel]\n\t\tif !ok {\n\t\t\tserviceLabel = getCanonicalContainerName(c)\n\t\t}\n\t\tservice, ok := services[serviceLabel]\n\t\tif !ok {\n\t\t\tservice = types.ServiceConfig{\n\t\t\t\tName:   serviceLabel,\n\t\t\t\tImage:  c.Image,\n\t\t\t\tLabels: c.Labels,\n\t\t\t}\n\t\t}\n\t\tservice.Scale = increment(service.Scale)\n\n\t\tinspect, err := s.apiClient().ContainerInspect(context.Background(), c.ID, client.ContainerInspectOptions{})\n\t\tif err != nil {\n\t\t\tservices[serviceLabel] = service\n\t\t\tcontinue\n\t\t}\n\t\ts.extractComposeConfiguration(&service, inspect.Container, volumes, secrets, networks)\n\t\tservice.Labels = cleanDockerPreviousLabels(service.Labels)\n\t\tservices[serviceLabel] = service\n\t}\n\n\tproject.Services = services\n\tproject.Networks = networks\n\tproject.Volumes = volumes\n\tproject.Secrets = secrets\n\treturn project, nil\n}\n\nfunc (s *composeService) extractComposeConfiguration(service *types.ServiceConfig, inspect container.InspectResponse, volumes types.Volumes, secrets types.Secrets, networks types.Networks) {\n\tservice.Environment = types.NewMappingWithEquals(inspect.Config.Env)\n\tif inspect.Config.Healthcheck != nil {\n\t\thealthConfig := inspect.Config.Healthcheck\n\t\tservice.HealthCheck = s.toComposeHealthCheck(healthConfig)\n\t}\n\tif len(inspect.Mounts) > 0 {\n\t\tdetectedVolumes, volumeConfigs, 
detectedSecrets, secretsConfigs := s.toComposeVolumes(inspect.Mounts)\n\t\tservice.Volumes = append(service.Volumes, volumeConfigs...)\n\t\tservice.Secrets = append(service.Secrets, secretsConfigs...)\n\t\tmaps.Copy(volumes, detectedVolumes)\n\t\tmaps.Copy(secrets, detectedSecrets)\n\t}\n\tif len(inspect.NetworkSettings.Networks) > 0 {\n\t\tdetectedNetworks, networkConfigs := s.toComposeNetwork(inspect.NetworkSettings.Networks)\n\t\tservice.Networks = networkConfigs\n\t\tmaps.Copy(networks, detectedNetworks)\n\t}\n\tif len(inspect.HostConfig.PortBindings) > 0 {\n\t\tfor key, portBindings := range inspect.HostConfig.PortBindings {\n\t\t\tfor _, portBinding := range portBindings {\n\t\t\t\tservice.Ports = append(service.Ports, types.ServicePortConfig{\n\t\t\t\t\tTarget:    uint32(key.Num()),\n\t\t\t\t\tPublished: portBinding.HostPort,\n\t\t\t\t\tProtocol:  string(key.Proto()),\n\t\t\t\t\tHostIP:    portBinding.HostIP.String(),\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (s *composeService) toComposeHealthCheck(healthConfig *container.HealthConfig) *types.HealthCheckConfig {\n\tvar healthCheck types.HealthCheckConfig\n\thealthCheck.Test = healthConfig.Test\n\tif healthConfig.Timeout != 0 {\n\t\ttimeout := types.Duration(healthConfig.Timeout)\n\t\thealthCheck.Timeout = &timeout\n\t}\n\tif healthConfig.Interval != 0 {\n\t\tinterval := types.Duration(healthConfig.Interval)\n\t\thealthCheck.Interval = &interval\n\t}\n\tif healthConfig.StartPeriod != 0 {\n\t\tstartPeriod := types.Duration(healthConfig.StartPeriod)\n\t\thealthCheck.StartPeriod = &startPeriod\n\t}\n\tif healthConfig.StartInterval != 0 {\n\t\tstartInterval := types.Duration(healthConfig.StartInterval)\n\t\thealthCheck.StartInterval = &startInterval\n\t}\n\tif healthConfig.Retries != 0 {\n\t\tretries := uint64(healthConfig.Retries)\n\t\thealthCheck.Retries = &retries\n\t}\n\treturn &healthCheck\n}\n\nfunc (s *composeService) toComposeVolumes(volumes []container.MountPoint) 
(map[string]types.VolumeConfig,\n\t[]types.ServiceVolumeConfig, map[string]types.SecretConfig, []types.ServiceSecretConfig,\n) {\n\tvolumeConfigs := make(map[string]types.VolumeConfig)\n\tsecretConfigs := make(map[string]types.SecretConfig)\n\tvar serviceVolumeConfigs []types.ServiceVolumeConfig\n\tvar serviceSecretConfigs []types.ServiceSecretConfig\n\n\tfor _, volume := range volumes {\n\t\tserviceVC := types.ServiceVolumeConfig{\n\t\t\tType:     string(volume.Type),\n\t\t\tSource:   volume.Source,\n\t\t\tTarget:   volume.Destination,\n\t\t\tReadOnly: !volume.RW,\n\t\t}\n\t\tswitch volume.Type {\n\t\tcase mount.TypeVolume:\n\t\t\tserviceVC.Source = volume.Name\n\t\t\tvol := types.VolumeConfig{}\n\t\t\tif volume.Driver != \"local\" {\n\t\t\t\tvol.Driver = volume.Driver\n\t\t\t\tvol.Name = volume.Name\n\t\t\t}\n\t\t\tvolumeConfigs[volume.Name] = vol\n\t\t\tserviceVolumeConfigs = append(serviceVolumeConfigs, serviceVC)\n\t\tcase mount.TypeBind:\n\t\t\tif strings.HasPrefix(volume.Destination, \"/run/secrets\") {\n\t\t\t\tdestination := strings.Split(volume.Destination, \"/\")\n\t\t\t\tsecret := types.SecretConfig{\n\t\t\t\t\tName: destination[len(destination)-1],\n\t\t\t\t\tFile: strings.TrimPrefix(volume.Source, \"/host_mnt\"),\n\t\t\t\t}\n\t\t\t\tsecretConfigs[secret.Name] = secret\n\t\t\t\tserviceSecretConfigs = append(serviceSecretConfigs, types.ServiceSecretConfig{\n\t\t\t\t\tSource: secret.Name,\n\t\t\t\t\tTarget: volume.Destination,\n\t\t\t\t})\n\t\t\t} else {\n\t\t\t\tserviceVolumeConfigs = append(serviceVolumeConfigs, serviceVC)\n\t\t\t}\n\t\t}\n\t}\n\treturn volumeConfigs, serviceVolumeConfigs, secretConfigs, serviceSecretConfigs\n}\n\nfunc (s *composeService) toComposeNetwork(networks map[string]*network.EndpointSettings) (map[string]types.NetworkConfig, map[string]*types.ServiceNetworkConfig) {\n\tnetworkConfigs := make(map[string]types.NetworkConfig)\n\tserviceNetworkConfigs := make(map[string]*types.ServiceNetworkConfig)\n\n\tfor name, net := range 
networks {\n\t\tinspect, err := s.apiClient().NetworkInspect(context.Background(), name, client.NetworkInspectOptions{})\n\t\tif err != nil {\n\t\t\tnetworkConfigs[name] = types.NetworkConfig{}\n\t\t} else {\n\t\t\tnetworkConfigs[name] = types.NetworkConfig{\n\t\t\t\tInternal: inspect.Network.Internal,\n\t\t\t}\n\t\t}\n\t\tserviceNetworkConfigs[name] = &types.ServiceNetworkConfig{\n\t\t\tAliases: net.Aliases,\n\t\t}\n\t}\n\treturn networkConfigs, serviceNetworkConfigs\n}\n\nfunc cleanDockerPreviousLabels(labels types.Labels) types.Labels {\n\tcleanedLabels := types.Labels{}\n\tfor key, value := range labels {\n\t\tif !strings.HasPrefix(key, \"com.docker.compose.\") && !strings.HasPrefix(key, \"desktop.docker.io\") {\n\t\t\tcleanedLabels[key] = value\n\t\t}\n\t}\n\treturn cleanedLabels\n}\n"
  },
  {
    "path": "pkg/compose/hash.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/opencontainers/go-digest\"\n)\n\n// ServiceHash computes the configuration hash for a service.\nfunc ServiceHash(o types.ServiceConfig) (string, error) {\n\t// remove the Build config when generating the service hash\n\to.Build = nil\n\to.PullPolicy = \"\"\n\to.Scale = nil\n\tif o.Deploy != nil {\n\t\to.Deploy.Replicas = nil\n\t}\n\to.DependsOn = nil\n\to.Profiles = nil\n\n\tbytes, err := json.Marshal(o)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn digest.SHA256.FromBytes(bytes).Encoded(), nil\n}\n\n// NetworkHash computes the configuration hash for a network.\nfunc NetworkHash(o *types.NetworkConfig) (string, error) {\n\tbytes, err := json.Marshal(o)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn digest.SHA256.FromBytes(bytes).Encoded(), nil\n}\n\n// VolumeHash computes the configuration hash for a volume.\nfunc VolumeHash(o types.VolumeConfig) (string, error) {\n\tif o.Driver == \"\" { // (TODO: jhrotko) This probably should be fixed in compose-go\n\t\to.Driver = \"local\"\n\t}\n\tbytes, err := json.Marshal(o)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn digest.SHA256.FromBytes(bytes).Encoded(), nil\n}\n"
  },
  {
    "path": "pkg/compose/hash_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestServiceHash(t *testing.T) {\n\thash1, err := ServiceHash(serviceConfig(1))\n\tassert.NilError(t, err)\n\thash2, err := ServiceHash(serviceConfig(2))\n\tassert.NilError(t, err)\n\tassert.Equal(t, hash1, hash2)\n}\n\nfunc serviceConfig(replicas int) types.ServiceConfig {\n\treturn types.ServiceConfig{\n\t\tScale: &replicas,\n\t\tDeploy: &types.DeployConfig{\n\t\t\tReplicas: &replicas,\n\t\t},\n\t\tName:  \"foo\",\n\t\tImage: \"bar\",\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/hook.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/api/pkg/stdcopy\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc (s composeService) runHook(ctx context.Context, ctr container.Summary, service types.ServiceConfig, hook types.ServiceHook, listener api.ContainerEventListener) error {\n\twOut := utils.GetWriter(func(line string) {\n\t\tlistener(api.ContainerEvent{\n\t\t\tType:    api.HookEventLog,\n\t\t\tSource:  getContainerNameWithoutProject(ctr) + \" ->\",\n\t\t\tID:      ctr.ID,\n\t\t\tService: service.Name,\n\t\t\tLine:    line,\n\t\t})\n\t})\n\tdefer wOut.Close() //nolint:errcheck\n\n\tdetached := listener == nil\n\texec, err := s.apiClient().ExecCreate(ctx, ctr.ID, client.ExecCreateOptions{\n\t\tUser:         hook.User,\n\t\tPrivileged:   hook.Privileged,\n\t\tEnv:          ToMobyEnv(hook.Environment),\n\t\tWorkingDir:   hook.WorkingDir,\n\t\tCmd:          hook.Command,\n\t\tAttachStdout: !detached,\n\t\tAttachStderr: !detached,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif detached {\n\t\treturn s.runWaitExec(ctx, exec.ID, service, listener)\n\t}\n\n\tattachOptions := 
client.ExecAttachOptions{\n\t\tTTY: service.Tty,\n\t}\n\tif service.Tty {\n\t\theight, width := s.stdout().GetTtySize()\n\t\tattachOptions.ConsoleSize = client.ConsoleSize{\n\t\t\tWidth:  width,\n\t\t\tHeight: height,\n\t\t}\n\t}\n\tattach, err := s.apiClient().ExecAttach(ctx, exec.ID, attachOptions)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer attach.Close()\n\n\tif service.Tty {\n\t\t_, err = io.Copy(wOut, attach.Reader)\n\t} else {\n\t\t_, err = stdcopy.StdCopy(wOut, wOut, attach.Reader)\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tinspected, err := s.apiClient().ExecInspect(ctx, exec.ID, client.ExecInspectOptions{})\n\tif err != nil {\n\t\treturn err\n\t}\n\tif inspected.ExitCode != 0 {\n\t\treturn fmt.Errorf(\"%s hook exited with status %d\", service.Name, inspected.ExitCode)\n\t}\n\treturn nil\n}\n\nfunc (s composeService) runWaitExec(ctx context.Context, execID string, service types.ServiceConfig, listener api.ContainerEventListener) error {\n\t_, err := s.apiClient().ExecStart(ctx, execID, client.ExecStartOptions{\n\t\tDetach: listener == nil,\n\t\tTTY:    service.Tty,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// We miss a ContainerExecWait API, so poll the exec until it stops running\n\ttick := time.NewTicker(100 * time.Millisecond)\n\tdefer tick.Stop()\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn ctx.Err()\n\t\tcase <-tick.C:\n\t\t\tinspect, err := s.apiClient().ExecInspect(ctx, execID, client.ExecInspectOptions{})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif !inspect.Running {\n\t\t\t\tif inspect.ExitCode != 0 {\n\t\t\t\t\treturn fmt.Errorf(\"%s hook exited with status %d\", service.Name, inspect.ExitCode)\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/hook_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"net\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/console\"\n\t\"github.com/docker/cli/cli/streams\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\n// TestRunHook_ConsoleSize verifies that ConsoleSize is only passed to ExecAttach\n// when the service has TTY enabled. 
When TTY is disabled, passing a non-zero\n// ConsoleSize causes the Docker daemon to return \"console size is only supported\n// when TTY is enabled\" (regression introduced in v5.1.0).\nfunc TestRunHook_ConsoleSize(t *testing.T) {\n\ttests := []struct {\n\t\tname            string\n\t\ttty             bool\n\t\texpectedConsole client.ConsoleSize\n\t}{\n\t\t{\n\t\t\tname:            \"no tty - ConsoleSize must be zero\",\n\t\t\ttty:             false,\n\t\t\texpectedConsole: client.ConsoleSize{},\n\t\t},\n\t\t{\n\t\t\tname:            \"with tty - ConsoleSize should reflect terminal dimensions\",\n\t\t\ttty:             true,\n\t\t\texpectedConsole: client.ConsoleSize{Width: 80, Height: 24},\n\t\t},\n\t}\n\n\tfor _, tc := range tests {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tmockCtrl := gomock.NewController(t)\n\t\t\tdefer mockCtrl.Finish()\n\n\t\t\tmockAPI := mocks.NewMockAPIClient(mockCtrl)\n\t\t\tmockCli := mocks.NewMockCli(mockCtrl)\n\t\t\tmockCli.EXPECT().Client().Return(mockAPI).AnyTimes()\n\t\t\tmockCli.EXPECT().Err().Return(streams.NewOut(os.Stderr)).AnyTimes()\n\n\t\t\t// Create a PTY so GetTtySize() returns real non-zero dimensions,\n\t\t\t// simulating an interactive terminal session.\n\t\t\tpty, slavePath, err := console.NewPty()\n\t\t\tassert.NilError(t, err)\n\t\t\tdefer pty.Close() //nolint:errcheck\n\t\t\tassert.NilError(t, pty.Resize(console.WinSize{Height: 24, Width: 80}))\n\n\t\t\tslaveFile, err := os.OpenFile(slavePath, os.O_RDWR, 0)\n\t\t\tassert.NilError(t, err)\n\t\t\tdefer slaveFile.Close() //nolint:errcheck\n\n\t\t\tmockCli.EXPECT().Out().Return(streams.NewOut(slaveFile)).AnyTimes()\n\n\t\t\tservice := types.ServiceConfig{\n\t\t\t\tName: \"test\",\n\t\t\t\tTty:  tc.tty,\n\t\t\t}\n\t\t\thook := types.ServiceHook{Command: []string{\"echo\", \"hello\"}}\n\t\t\tctr := container.Summary{ID: \"container123\"}\n\n\t\t\tmockAPI.EXPECT().\n\t\t\t\tExecCreate(gomock.Any(), \"container123\", 
gomock.Any()).\n\t\t\t\tReturn(client.ExecCreateResult{ID: \"exec123\"}, nil)\n\n\t\t\t// Return a pipe that immediately closes so the reader gets EOF.\n\t\t\tserverConn, clientConn := net.Pipe()\n\t\t\tserverConn.Close() //nolint:errcheck\n\t\t\tmockAPI.EXPECT().\n\t\t\t\tExecAttach(gomock.Any(), \"exec123\", client.ExecAttachOptions{\n\t\t\t\t\tTTY:         tc.tty,\n\t\t\t\t\tConsoleSize: tc.expectedConsole,\n\t\t\t\t}).\n\t\t\t\tReturn(client.ExecAttachResult{\n\t\t\t\t\tHijackedResponse: client.NewHijackedResponse(clientConn, \"\"),\n\t\t\t\t}, nil)\n\n\t\t\tmockAPI.EXPECT().\n\t\t\t\tExecInspect(gomock.Any(), \"exec123\", gomock.Any()).\n\t\t\t\tReturn(client.ExecInspectResult{ExitCode: 0}, nil)\n\n\t\t\ts, err := NewComposeService(mockCli)\n\t\t\tassert.NilError(t, err)\n\n\t\t\tnoopListener := func(api.ContainerEvent) {}\n\t\t\terr = s.(*composeService).runHook(t.Context(), ctr, service, hook, noopListener)\n\t\t\tassert.NilError(t, err)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/image_pruner.go",
    "content": "/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"sort\"\n\t\"sync\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// ImagePruneMode controls how aggressively images associated with the project\n// are removed from the engine.\ntype ImagePruneMode string\n\nconst (\n\t// ImagePruneNone indicates that no project images should be removed.\n\tImagePruneNone ImagePruneMode = \"\"\n\t// ImagePruneLocal indicates that only images built locally by Compose\n\t// should be removed.\n\tImagePruneLocal ImagePruneMode = \"local\"\n\t// ImagePruneAll indicates that all project-associated images, including\n\t// remote images should be removed.\n\tImagePruneAll ImagePruneMode = \"all\"\n)\n\n// ImagePruneOptions controls the behavior of image pruning.\ntype ImagePruneOptions struct {\n\tMode ImagePruneMode\n\n\t// RemoveOrphans will result in the removal of images that were built for\n\t// the project regardless of whether they are for a known service if true.\n\tRemoveOrphans bool\n}\n\n// ImagePruner handles image removal during Compose `down` operations.\ntype ImagePruner struct {\n\tclient  
client.ImageAPIClient\n\tproject *types.Project\n}\n\n// NewImagePruner creates an ImagePruner object for a project.\nfunc NewImagePruner(imageClient client.ImageAPIClient, project *types.Project) *ImagePruner {\n\treturn &ImagePruner{\n\t\tclient:  imageClient,\n\t\tproject: project,\n\t}\n}\n\n// ImagesToPrune returns the set of images that should be removed.\nfunc (p *ImagePruner) ImagesToPrune(ctx context.Context, opts ImagePruneOptions) ([]string, error) {\n\tif opts.Mode == ImagePruneNone {\n\t\treturn nil, nil\n\t} else if opts.Mode != ImagePruneLocal && opts.Mode != ImagePruneAll {\n\t\treturn nil, fmt.Errorf(\"unsupported image prune mode: %s\", opts.Mode)\n\t}\n\tvar images []string\n\n\tif opts.Mode == ImagePruneAll {\n\t\tnamedImages, err := p.namedImages(ctx)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\timages = append(images, namedImages...)\n\t}\n\n\tprojectImages, err := p.labeledLocalImages(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor _, img := range projectImages {\n\t\tif len(img.RepoTags) == 0 {\n\t\t\t// currently, we're only pruning the tagged references, but\n\t\t\t// if we start removing the dangling images and grouping by\n\t\t\t// service, we can remove this (and should rely on `Image::ID`)\n\t\t\tcontinue\n\t\t}\n\n\t\tvar shouldPrune bool\n\t\tif opts.RemoveOrphans {\n\t\t\t// indiscriminately prune all project images even if they're not\n\t\t\t// referenced by the current Compose state (e.g. 
the service was\n\t\t\t// removed from YAML)\n\t\t\tshouldPrune = true\n\t\t} else {\n\t\t\t// only prune the image if it belongs to a known service for the project.\n\t\t\tif _, err := p.project.GetService(img.Labels[api.ServiceLabel]); err == nil {\n\t\t\t\tshouldPrune = true\n\t\t\t}\n\t\t}\n\n\t\tif shouldPrune {\n\t\t\timages = append(images, img.RepoTags[0])\n\t\t}\n\t}\n\n\tfallbackImages, err := p.unlabeledLocalImages(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\timages = append(images, fallbackImages...)\n\n\timages = normalizeAndDedupeImages(images)\n\treturn images, nil\n}\n\n// namedImages are those that are explicitly named in the service config.\n//\n// These could be registry-only images (no local build), hybrid (build as a\n// fallback if the pull fails), or local-only (image does not exist in a\n// registry).\nfunc (p *ImagePruner) namedImages(ctx context.Context) ([]string, error) {\n\tvar images []string\n\tfor _, service := range p.project.Services {\n\t\tif service.Image == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\timages = append(images, service.Image)\n\t}\n\treturn p.filterImagesByExistence(ctx, images)\n}\n\n// labeledLocalImages are images that were locally-built by a current version of\n// Compose (older versions did not always label built images).\n//\n// The image name could either have been defined by the user or implicitly\n// created from the project + service name.\nfunc (p *ImagePruner) labeledLocalImages(ctx context.Context) ([]image.Summary, error) {\n\tres, err := p.client.ImageList(ctx, client.ImageListOptions{\n\t\t// TODO(milas): we should really clean up the dangling images as\n\t\t// well (historically we have NOT); need to refactor this to handle\n\t\t// it gracefully without producing confusing CLI output, i.e. 
we\n\t\t// do not want to print out a bunch of untagged/dangling image IDs,\n\t\t// they should be grouped into a logical operation for the relevant\n\t\t// service\n\t\tFilters: projectFilter(p.project.Name).Add(\"dangling\", \"false\"),\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn res.Items, nil\n}\n\n// unlabeledLocalImages are images that match the implicit naming convention\n// for locally-built images but did not get labeled, presumably because they\n// were produced by an older version of Compose.\n//\n// This is transitional to ensure `down` continues to work as expected on\n// projects built/launched by previous versions of Compose. It can safely\n// be removed after some time.\nfunc (p *ImagePruner) unlabeledLocalImages(ctx context.Context) ([]string, error) {\n\tvar images []string\n\tfor _, service := range p.project.Services {\n\t\tif service.Image != \"\" {\n\t\t\tcontinue\n\t\t}\n\t\timg := api.GetImageNameOrDefault(service, p.project.Name)\n\t\timages = append(images, img)\n\t}\n\treturn p.filterImagesByExistence(ctx, images)\n}\n\n// filterImagesByExistence returns the subset of images that exist in the\n// engine store.\n//\n// NOTE: Any transient errors communicating with the API will result in an\n// image being returned as \"existing\", as this method is exclusively used to\n// find images to remove, so the worst case of being conservative here is an\n// attempt to remove an image that doesn't exist, which will cause a warning\n// but is otherwise harmless.\nfunc (p *ImagePruner) filterImagesByExistence(ctx context.Context, imageNames []string) ([]string, error) {\n\tvar mu sync.Mutex\n\tvar ret []string\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, img := range imageNames {\n\t\teg.Go(func() error {\n\t\t\t_, err := p.client.ImageInspect(ctx, img)\n\t\t\tif errdefs.IsNotFound(err) {\n\t\t\t\t// err on the side of caution: only skip if we successfully\n\t\t\t\t// queried the API and got back a definitive \"not 
exists\"\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tmu.Lock()\n\t\t\tdefer mu.Unlock()\n\t\t\tret = append(ret, img)\n\t\t\treturn nil\n\t\t})\n\t}\n\n\tif err := eg.Wait(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn ret, nil\n}\n\n// normalizeAndDedupeImages returns the unique set of images after normalization.\nfunc normalizeAndDedupeImages(images []string) []string {\n\tseen := make(map[string]struct{}, len(images))\n\tfor _, img := range images {\n\t\t// since some references come from user input (service.image) and some\n\t\t// come from the engine API, we standardize them, opting for the\n\t\t// familiar name format since they'll also be displayed in the CLI\n\t\tref, err := reference.ParseNormalizedNamed(img)\n\t\tif err == nil {\n\t\t\tref = reference.TagNameOnly(ref)\n\t\t\timg = reference.FamiliarString(ref)\n\t\t}\n\t\tseen[img] = struct{}{}\n\t}\n\tret := make([]string, 0, len(seen))\n\tfor v := range seen {\n\t\tret = append(ret, v)\n\t}\n\t// ensure a deterministic return result - the actual ordering is not useful\n\tsort.Strings(ret)\n\treturn ret\n}\n"
  },
  {
    "path": "pkg/compose/images.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"slices\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/containerd/platforms\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/moby/moby/client/pkg/versions\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Images(ctx context.Context, projectName string, options api.ImagesOptions) (map[string]api.ImageSummary, error) {\n\tprojectName = strings.ToLower(projectName)\n\tallContainers, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tAll:     true,\n\t\tFilters: projectFilter(projectName),\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar containers []container.Summary\n\tif len(options.Services) > 0 {\n\t\t// filter service containers\n\t\tfor _, c := range allContainers.Items {\n\t\t\tif slices.Contains(options.Services, c.Labels[api.ServiceLabel]) {\n\t\t\t\tcontainers = append(containers, c)\n\t\t\t}\n\t\t}\n\t} else {\n\t\tcontainers = allContainers.Items\n\t}\n\n\tversion, err := s.RuntimeVersion(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\twithPlatform := versions.GreaterThanOrEqualTo(version, apiVersion149)\n\n\tsummary := 
map[string]api.ImageSummary{}\n\tvar mux sync.Mutex\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, c := range containers {\n\t\teg.Go(func() error {\n\t\t\timage, err := s.apiClient().ImageInspect(ctx, c.Image)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tid := image.ID // platform-specific image ID can't be combined with image tag, see https://github.com/moby/moby/issues/49995\n\n\t\t\tif withPlatform && c.ImageManifestDescriptor != nil && c.ImageManifestDescriptor.Platform != nil {\n\t\t\t\timage, err = s.apiClient().ImageInspect(ctx, c.Image, client.ImageInspectWithPlatform(c.ImageManifestDescriptor.Platform))\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tvar repository, tag string\n\t\t\tref, err := reference.ParseDockerRef(c.Image)\n\t\t\tif err == nil {\n\t\t\t\t// ParseDockerRef will reject a local image ID\n\t\t\t\trepository = reference.FamiliarName(ref)\n\t\t\t\tif tagged, ok := ref.(reference.Tagged); ok {\n\t\t\t\t\ttag = tagged.Tag()\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tvar created *time.Time\n\t\t\tif image.Created != \"\" {\n\t\t\t\tt, err := time.Parse(time.RFC3339Nano, image.Created)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tcreated = &t\n\t\t\t}\n\n\t\t\tmux.Lock()\n\t\t\tdefer mux.Unlock()\n\t\t\tsummary[getCanonicalContainerName(c)] = api.ImageSummary{\n\t\t\t\tID:         id,\n\t\t\t\tRepository: repository,\n\t\t\t\tTag:        tag,\n\t\t\t\tPlatform: platforms.Platform{\n\t\t\t\t\tArchitecture: image.Architecture,\n\t\t\t\t\tOS:           image.Os,\n\t\t\t\t\tOSVersion:    image.OsVersion,\n\t\t\t\t\tVariant:      image.Variant,\n\t\t\t\t},\n\t\t\t\tSize:        image.Size,\n\t\t\t\tCreated:     created,\n\t\t\t\tLastTagTime: image.Metadata.LastTagTime,\n\t\t\t}\n\t\t\treturn nil\n\t\t})\n\t}\n\n\terr = eg.Wait()\n\treturn summary, err\n}\n\nfunc (s *composeService) getImageSummaries(ctx context.Context, repoTags []string) (map[string]api.ImageSummary, error) {\n\tsummary := 
map[string]api.ImageSummary{}\n\tl := sync.Mutex{}\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, repoTag := range repoTags {\n\t\teg.Go(func() error {\n\t\t\tinspect, err := s.apiClient().ImageInspect(ctx, repoTag)\n\t\t\tif err != nil {\n\t\t\t\tif errdefs.IsNotFound(err) {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\treturn fmt.Errorf(\"unable to get image '%s': %w\", repoTag, err)\n\t\t\t}\n\t\t\ttag := \"\"\n\t\t\trepository := \"\"\n\t\t\tref, err := reference.ParseDockerRef(repoTag)\n\t\t\tif err == nil {\n\t\t\t\t// ParseDockerRef will reject a local image ID\n\t\t\t\trepository = reference.FamiliarName(ref)\n\t\t\t\tif tagged, ok := ref.(reference.Tagged); ok {\n\t\t\t\t\ttag = tagged.Tag()\n\t\t\t\t}\n\t\t\t}\n\t\t\tl.Lock()\n\t\t\tsummary[repoTag] = api.ImageSummary{\n\t\t\t\tID:          inspect.ID,\n\t\t\t\tRepository:  repository,\n\t\t\t\tTag:         tag,\n\t\t\t\tSize:        inspect.Size,\n\t\t\t\tLastTagTime: inspect.Metadata.LastTagTime,\n\t\t\t}\n\t\t\tl.Unlock()\n\t\t\treturn nil\n\t\t})\n\t}\n\treturn summary, eg.Wait()\n}\n"
  },
  {
    "path": "pkg/compose/images_test.go",
    "content": "/*\n   Copyright 2024 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"net/netip\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestImages(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\targs := projectFilter(strings.ToLower(testProject))\n\tlistOpts := client.ContainerListOptions{All: true, Filters: args}\n\tapi.EXPECT().ServerVersion(gomock.Any(), gomock.Any()).Return(client.ServerVersionResult{APIVersion: \"1.96\"}, nil).AnyTimes()\n\ttimeStr1 := \"2025-06-06T06:06:06.000000000Z\"\n\tcreated1, _ := time.Parse(time.RFC3339Nano, timeStr1)\n\ttimeStr2 := \"2025-03-03T03:03:03.000000000Z\"\n\tcreated2, _ := time.Parse(time.RFC3339Nano, timeStr2)\n\timage1 := imageInspect(\"image1\", \"foo:1\", 12345, timeStr1)\n\timage2 := imageInspect(\"image2\", \"bar:2\", 67890, timeStr2)\n\tapi.EXPECT().ImageInspect(anyCancellableContext(), \"foo:1\").Return(client.ImageInspectResult{InspectResponse: image1}, nil).MaxTimes(2)\n\tapi.EXPECT().ImageInspect(anyCancellableContext(), 
\"bar:2\").Return(client.ImageInspectResult{InspectResponse: image2}, nil)\n\tc1 := containerDetail(\"service1\", \"123\", container.StateRunning, \"foo:1\")\n\tc2 := containerDetail(\"service1\", \"456\", container.StateRunning, \"bar:2\")\n\tc2.Ports = []container.PortSummary{{PublicPort: 80, PrivatePort: 90, IP: netip.MustParseAddr(\"127.0.0.1\")}}\n\tc3 := containerDetail(\"service2\", \"789\", container.StateExited, \"foo:1\")\n\tapi.EXPECT().ContainerList(t.Context(), listOpts).Return(client.ContainerListResult{\n\t\tItems: []container.Summary{c1, c2, c3},\n\t}, nil)\n\n\timages, err := tested.Images(t.Context(), strings.ToLower(testProject), compose.ImagesOptions{})\n\n\texpected := map[string]compose.ImageSummary{\n\t\t\"123\": {\n\t\t\tID:         \"image1\",\n\t\t\tRepository: \"foo\",\n\t\t\tTag:        \"1\",\n\t\t\tSize:       12345,\n\t\t\tCreated:    &created1,\n\t\t},\n\t\t\"456\": {\n\t\t\tID:         \"image2\",\n\t\t\tRepository: \"bar\",\n\t\t\tTag:        \"2\",\n\t\t\tSize:       67890,\n\t\t\tCreated:    &created2,\n\t\t},\n\t\t\"789\": {\n\t\t\tID:         \"image1\",\n\t\t\tRepository: \"foo\",\n\t\t\tTag:        \"1\",\n\t\t\tSize:       12345,\n\t\t\tCreated:    &created1,\n\t\t},\n\t}\n\tassert.NilError(t, err)\n\tassert.DeepEqual(t, images, expected)\n}\n\nfunc imageInspect(id string, imageReference string, size int64, created string) image.InspectResponse {\n\treturn image.InspectResponse{\n\t\tID: id,\n\t\tRepoTags: []string{\n\t\t\t\"someRepo:someTag\",\n\t\t\timageReference,\n\t\t},\n\t\tSize:    size,\n\t\tCreated: created,\n\t}\n}\n\nfunc containerDetail(service string, id string, status container.ContainerState, imageName string) container.Summary {\n\treturn container.Summary{\n\t\tID:     id,\n\t\tNames:  []string{\"/\" + id},\n\t\tImage:  imageName,\n\t\tLabels: containerLabels(service, false),\n\t\tState:  status,\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/kill.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Kill(ctx context.Context, projectName string, options api.KillOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.kill(ctx, strings.ToLower(projectName), options)\n\t}, \"kill\", s.events)\n}\n\nfunc (s *composeService) kill(ctx context.Context, projectName string, options api.KillOptions) error {\n\tservices := options.Services\n\n\tvar containers Containers\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffInclude, options.All, services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject := options.Project\n\tif project == nil {\n\t\tproject, err = s.getProjectWithResources(ctx, containers, projectName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif !options.RemoveOrphans {\n\t\tcontainers = containers.filter(isService(project.ServiceNames()...))\n\t}\n\tif len(containers) == 0 {\n\t\treturn api.ErrNoResources\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tcontainers.forEach(func(ctr container.Summary) {\n\t\teg.Go(func() error {\n\t\t\teventName := 
getContainerProgressName(ctr)\n\t\t\ts.events.On(killingEvent(eventName))\n\t\t\t_, err := s.apiClient().ContainerKill(ctx, ctr.ID, client.ContainerKillOptions{\n\t\t\t\tSignal: options.Signal,\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\ts.events.On(errorEvent(eventName, \"Error while Killing\"))\n\t\t\t\treturn err\n\t\t\t}\n\t\t\ts.events.On(killedEvent(eventName))\n\t\t\treturn nil\n\t\t})\n\t})\n\treturn eg.Wait()\n}\n"
  },
  {
    "path": "pkg/compose/kill_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/network\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n)\n\nconst testProject = \"testProject\"\n\nfunc TestKillAll(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tname := strings.ToLower(testProject)\n\n\tapi.EXPECT().ContainerList(t.Context(), client.ContainerListOptions{\n\t\tFilters: projectFilter(name).Add(\"label\", hasConfigHashLabel()),\n\t}).Return(client.ContainerListResult{\n\t\tItems: []container.Summary{\n\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\ttestContainer(\"service1\", \"456\", false),\n\t\t\ttestContainer(\"service2\", \"789\", false),\n\t\t},\n\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, nil)\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: 
projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{\n\t\t\tItems: []network.Summary{{\n\t\t\t\tNetwork: network.Network{ID: \"abc123\", Name: \"testProject_default\"},\n\t\t\t}},\n\t\t}, nil)\n\tapi.EXPECT().ContainerKill(anyCancellableContext(), \"123\", client.ContainerKillOptions{}).Return(client.ContainerKillResult{}, nil)\n\tapi.EXPECT().ContainerKill(anyCancellableContext(), \"456\", client.ContainerKillOptions{}).Return(client.ContainerKillResult{}, nil)\n\tapi.EXPECT().ContainerKill(anyCancellableContext(), \"789\", client.ContainerKillOptions{}).Return(client.ContainerKillResult{}, nil)\n\n\terr = tested.Kill(t.Context(), name, compose.KillOptions{})\n\tassert.NilError(t, err)\n}\n\nfunc TestKillSignal(t *testing.T) {\n\tconst serviceName = \"service1\"\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tname := strings.ToLower(testProject)\n\tlistOptions := client.ContainerListOptions{\n\t\tFilters: projectFilter(name).Add(\"label\", serviceFilter(serviceName), hasConfigHashLabel()),\n\t}\n\n\tapi.EXPECT().ContainerList(t.Context(), listOptions).Return(client.ContainerListResult{\n\t\tItems: []container.Summary{testContainer(serviceName, \"123\", false)},\n\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, nil)\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{\n\t\t\tItems: []network.Summary{{\n\t\t\t\tNetwork: network.Network{ID: \"abc123\", Name: \"testProject_default\"},\n\t\t\t}},\n\t\t}, nil)\n\tapi.EXPECT().ContainerKill(anyCancellableContext(), \"123\", client.ContainerKillOptions{\n\t\tSignal: 
\"SIGTERM\",\n\t}).Return(client.ContainerKillResult{}, nil)\n\n\terr = tested.Kill(t.Context(), name, compose.KillOptions{Services: []string{serviceName}, Signal: \"SIGTERM\"})\n\tassert.NilError(t, err)\n}\n\nfunc testContainer(service string, id string, oneOff bool) container.Summary {\n\t// canonical docker names in the API start with a leading slash, some\n\t// parts of Compose code will attempt to strip this off, so make sure\n\t// it's consistently present\n\tname := \"/\" + strings.TrimPrefix(id, \"/\")\n\treturn container.Summary{\n\t\tID:     id,\n\t\tNames:  []string{name},\n\t\tLabels: containerLabels(service, oneOff),\n\t\tState:  container.StateExited,\n\t}\n}\n\nfunc containerLabels(service string, oneOff bool) map[string]string {\n\tworkingdir := \"/src/pkg/compose/testdata\"\n\tcomposefile := filepath.Join(workingdir, \"compose.yaml\")\n\tlabels := map[string]string{\n\t\tcompose.ServiceLabel:     service,\n\t\tcompose.ConfigFilesLabel: composefile,\n\t\tcompose.WorkingDirLabel:  workingdir,\n\t\tcompose.ProjectLabel:     strings.ToLower(testProject),\n\t}\n\tif oneOff {\n\t\tlabels[compose.OneoffLabel] = \"True\"\n\t}\n\treturn labels\n}\n\nfunc anyCancellableContext() gomock.Matcher {\n\t//nolint:forbidigo // This creates a context type for gomock matching, not for actual test usage\n\tctxWithCancel, cancel := context.WithCancel(context.Background())\n\tcancel()\n\treturn gomock.AssignableToTypeOf(ctxWithCancel)\n}\n\nfunc projectFilterListOpt(withOneOff bool) client.ContainerListOptions {\n\tfilter := projectFilter(strings.ToLower(testProject)).Add(\"label\", hasConfigHashLabel())\n\tif !withOneOff {\n\t\tfilter.Add(\"label\", oneOffFilter(false))\n\t}\n\treturn client.ContainerListOptions{\n\t\tFilters: filter,\n\t\tAll:     true,\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/loader.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/loader\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/remote\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\n// LoadProject implements api.Compose.LoadProject\n// It loads and validates a Compose project from configuration files.\nfunc (s *composeService) LoadProject(ctx context.Context, options api.ProjectLoadOptions) (*types.Project, error) {\n\t// Setup remote loaders (Git, OCI)\n\tremoteLoaders := s.createRemoteLoaders(options)\n\n\tprojectOptions, err := s.buildProjectOptions(options, remoteLoaders)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Register all user-provided listeners (e.g., for metrics collection)\n\tfor _, listener := range options.LoadListeners {\n\t\tif listener != nil {\n\t\t\tprojectOptions.WithListeners(listener)\n\t\t}\n\t}\n\n\tif options.Compatibility || utils.StringToBool(projectOptions.Environment[api.ComposeCompatibility]) {\n\t\tapi.Separator = \"_\"\n\t}\n\n\tproject, err := projectOptions.LoadProject(ctx)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Post-processing: service selection, environment resolution, 
etc.\n\tproject, err = s.postProcessProject(project, options)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn project, nil\n}\n\n// createRemoteLoaders creates Git and OCI remote loaders if not in offline mode\nfunc (s *composeService) createRemoteLoaders(options api.ProjectLoadOptions) []loader.ResourceLoader {\n\tif options.Offline {\n\t\treturn nil\n\t}\n\tgit := remote.NewGitRemoteLoader(s.dockerCli, options.Offline)\n\toci := remote.NewOCIRemoteLoader(s.dockerCli, options.Offline, options.OCI)\n\treturn []loader.ResourceLoader{git, oci}\n}\n\n// buildProjectOptions constructs compose-go ProjectOptions from API options\nfunc (s *composeService) buildProjectOptions(options api.ProjectLoadOptions, remoteLoaders []loader.ResourceLoader) (*cli.ProjectOptions, error) {\n\topts := []cli.ProjectOptionsFn{\n\t\tcli.WithWorkingDirectory(options.WorkingDir),\n\t\tcli.WithOsEnv,\n\t}\n\n\t// Add PWD if not present\n\tif _, present := os.LookupEnv(\"PWD\"); !present {\n\t\tif pwd, err := os.Getwd(); err == nil {\n\t\t\topts = append(opts, cli.WithEnv([]string{\"PWD=\" + pwd}))\n\t\t}\n\t}\n\n\t// Add remote loaders\n\tfor _, r := range remoteLoaders {\n\t\topts = append(opts, cli.WithResourceLoader(r))\n\t}\n\n\topts = append(opts,\n\t\t// Load PWD/.env if present and no explicit --env-file has been set\n\t\tcli.WithEnvFiles(options.EnvFiles...),\n\t\t// read dot env file to populate project environment\n\t\tcli.WithDotEnv,\n\t\t// get compose file path set by COMPOSE_FILE\n\t\tcli.WithConfigFileEnv,\n\t\t// if none was selected, get default compose.yaml file from current dir or parent folder\n\t\tcli.WithDefaultConfigPath,\n\t\t// .. 
and then, a project directory != PWD may have been set, so load the .env file again\n\t\tcli.WithEnvFiles(options.EnvFiles...), //nolint:gocritic // intentionally applying cli.WithEnvFiles twice.\n\t\tcli.WithDotEnv,                        //nolint:gocritic // intentionally applying cli.WithDotEnv twice.\n\t\t// eventually COMPOSE_PROFILES should have been set\n\t\tcli.WithDefaultProfiles(options.Profiles...),\n\t\tcli.WithName(options.ProjectName),\n\t)\n\n\treturn cli.NewProjectOptions(options.ConfigPaths, append(options.ProjectOptionsFns, opts...)...)\n}\n\n// postProcessProject applies post-loading transformations to the project\nfunc (s *composeService) postProcessProject(project *types.Project, options api.ProjectLoadOptions) (*types.Project, error) {\n\tif project.Name == \"\" {\n\t\treturn nil, errors.New(\"project name can't be empty. Use ProjectName option to set a valid name\")\n\t}\n\n\tproject, err := project.WithServicesEnabled(options.Services...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Add custom labels\n\tfor name, s := range project.Services {\n\t\ts.CustomLabels = map[string]string{\n\t\t\tapi.ProjectLabel:     project.Name,\n\t\t\tapi.ServiceLabel:     name,\n\t\t\tapi.VersionLabel:     api.ComposeVersion,\n\t\t\tapi.WorkingDirLabel:  project.WorkingDir,\n\t\t\tapi.ConfigFilesLabel: strings.Join(project.ComposeFiles, \",\"),\n\t\t\tapi.OneoffLabel:      \"False\",\n\t\t}\n\t\tif len(options.EnvFiles) != 0 {\n\t\t\ts.CustomLabels[api.EnvironmentFileLabel] = strings.Join(options.EnvFiles, \",\")\n\t\t}\n\t\tproject.Services[name] = s\n\t}\n\n\tproject, err = project.WithSelectedServices(options.Services)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Remove unnecessary resources if not All\n\tif !options.All {\n\t\tproject = project.WithoutUnnecessaryResources()\n\t}\n\n\treturn project, nil\n}\n"
  },
  {
    "path": "pkg/compose/loader_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestLoadProject_Basic(t *testing.T) {\n\t// Create a temporary compose file\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nname: test-project\nservices:\n  web:\n    image: nginx:latest\n    ports:\n      - \"8080:80\"\n  db:\n    image: postgres:latest\n    environment:\n      POSTGRES_PASSWORD: secret\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Load the project\n\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\tConfigPaths: []string{composeFile},\n\t})\n\n\t// Assertions\n\trequire.NoError(t, err)\n\tassert.NotNil(t, project)\n\tassert.Equal(t, \"test-project\", project.Name)\n\tassert.Len(t, project.Services, 2)\n\tassert.Contains(t, project.Services, \"web\")\n\tassert.Contains(t, project.Services, \"db\")\n\n\t// Check labels were applied\n\twebService := project.Services[\"web\"]\n\tassert.Equal(t, \"test-project\", 
webService.CustomLabels[api.ProjectLabel])\n\tassert.Equal(t, \"web\", webService.CustomLabels[api.ServiceLabel])\n}\n\nfunc TestLoadProject_WithEnvironmentResolution(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nservices:\n  app:\n    image: myapp:latest\n    environment:\n      - TEST_VAR=${TEST_VAR}\n      - LITERAL_VAR=literal_value\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\t// Set environment variable\n\tt.Setenv(\"TEST_VAR\", \"resolved_value\")\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Test with environment resolution (default)\n\tt.Run(\"WithResolution\", func(t *testing.T) {\n\t\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\t\tConfigPaths: []string{composeFile},\n\t\t})\n\t\trequire.NoError(t, err)\n\n\t\tappService := project.Services[\"app\"]\n\t\t// Environment should be resolved\n\t\tassert.NotNil(t, appService.Environment[\"TEST_VAR\"])\n\t\tassert.Equal(t, \"resolved_value\", *appService.Environment[\"TEST_VAR\"])\n\t\tassert.NotNil(t, appService.Environment[\"LITERAL_VAR\"])\n\t\tassert.Equal(t, \"literal_value\", *appService.Environment[\"LITERAL_VAR\"])\n\t})\n\n\t// Test without environment resolution\n\tt.Run(\"WithoutResolution\", func(t *testing.T) {\n\t\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\t\tConfigPaths:       []string{composeFile},\n\t\t\tProjectOptionsFns: []cli.ProjectOptionsFn{cli.WithoutEnvironmentResolution},\n\t\t})\n\t\trequire.NoError(t, err)\n\n\t\tappService := project.Services[\"app\"]\n\t\t// Environment should NOT be resolved, keeping raw values\n\t\t// Note: This depends on compose-go behavior, which may still have some resolution\n\t\tassert.NotNil(t, appService.Environment)\n\t})\n}\n\nfunc TestLoadProject_ServiceSelection(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := 
filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nservices:\n  web:\n    image: nginx:latest\n  db:\n    image: postgres:latest\n  cache:\n    image: redis:latest\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Load only specific services\n\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\tConfigPaths: []string{composeFile},\n\t\tServices:    []string{\"web\", \"db\"},\n\t})\n\n\trequire.NoError(t, err)\n\tassert.Len(t, project.Services, 2)\n\tassert.Contains(t, project.Services, \"web\")\n\tassert.Contains(t, project.Services, \"db\")\n\tassert.NotContains(t, project.Services, \"cache\")\n}\n\nfunc TestLoadProject_WithProfiles(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nservices:\n  web:\n    image: nginx:latest\n  debug:\n    image: busybox:latest\n    profiles: [\"debug\"]\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Without debug profile\n\tt.Run(\"WithoutProfile\", func(t *testing.T) {\n\t\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\t\tConfigPaths: []string{composeFile},\n\t\t})\n\t\trequire.NoError(t, err)\n\t\tassert.Len(t, project.Services, 1)\n\t\tassert.Contains(t, project.Services, \"web\")\n\t})\n\n\t// With debug profile\n\tt.Run(\"WithProfile\", func(t *testing.T) {\n\t\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\t\tConfigPaths: []string{composeFile},\n\t\t\tProfiles:    []string{\"debug\"},\n\t\t})\n\t\trequire.NoError(t, err)\n\t\tassert.Len(t, project.Services, 2)\n\t\tassert.Contains(t, project.Services, \"web\")\n\t\tassert.Contains(t, project.Services, \"debug\")\n\t})\n}\n\nfunc 
TestLoadProject_WithLoadListeners(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nservices:\n  web:\n    image: nginx:latest\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Track events received\n\tvar events []string\n\tlistener := func(event string, metadata map[string]any) {\n\t\tevents = append(events, event)\n\t}\n\n\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\tConfigPaths:   []string{composeFile},\n\t\tLoadListeners: []api.LoadListener{listener},\n\t})\n\n\trequire.NoError(t, err)\n\tassert.NotNil(t, project)\n\n\t// Which events fire (if any) depends on the compose-go loader, so we only\n\t// assert that the project loaded; events stays nil until the listener appends to it.\n\t_ = events\n}\n\nfunc TestLoadProject_ProjectNameInference(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nservices:\n  web:\n    image: nginx:latest\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Without explicit project name\n\tt.Run(\"InferredName\", func(t *testing.T) {\n\t\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\t\tConfigPaths: []string{composeFile},\n\t\t})\n\t\trequire.NoError(t, err)\n\t\t// Project name should be inferred from directory\n\t\tassert.NotEmpty(t, project.Name)\n\t})\n\n\t// With explicit project name\n\tt.Run(\"ExplicitName\", func(t *testing.T) {\n\t\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\t\tConfigPaths: []string{composeFile},\n\t\t\tProjectName: 
\"my-custom-project\",\n\t\t})\n\t\trequire.NoError(t, err)\n\t\tassert.Equal(t, \"my-custom-project\", project.Name)\n\t})\n}\n\nfunc TestLoadProject_Compatibility(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nservices:\n  web:\n    image: nginx:latest\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// With compatibility mode\n\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\tConfigPaths:   []string{composeFile},\n\t\tCompatibility: true,\n\t})\n\n\trequire.NoError(t, err)\n\tassert.NotNil(t, project)\n\t// In compatibility mode, separator should be \"_\"\n\tassert.Equal(t, \"_\", api.Separator)\n\n\t// Reset separator\n\tapi.Separator = \"-\"\n}\n\nfunc TestLoadProject_InvalidComposeFile(t *testing.T) {\n\ttmpDir := t.TempDir()\n\tcomposeFile := filepath.Join(tmpDir, \"compose.yaml\")\n\tcomposeContent := `\nthis is not valid yaml: [[[\n`\n\terr := os.WriteFile(composeFile, []byte(composeContent), 0o644)\n\trequire.NoError(t, err)\n\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Should return an error for invalid YAML\n\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\tConfigPaths: []string{composeFile},\n\t})\n\n\trequire.Error(t, err)\n\tassert.Nil(t, project)\n}\n\nfunc TestLoadProject_MissingComposeFile(t *testing.T) {\n\tservice, err := NewComposeService(nil)\n\trequire.NoError(t, err)\n\n\t// Should return an error for missing file\n\tproject, err := service.LoadProject(t.Context(), api.ProjectLoadOptions{\n\t\tConfigPaths: []string{\"/nonexistent/compose.yaml\"},\n\t})\n\n\trequire.Error(t, err)\n\tassert.Nil(t, project)\n}\n"
  },
  {
    "path": "pkg/compose/logs.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"io\"\n\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/moby/moby/api/pkg/stdcopy\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc (s *composeService) Logs(\n\tctx context.Context,\n\tprojectName string,\n\tconsumer api.LogConsumer,\n\toptions api.LogOptions,\n) error {\n\tvar containers Containers\n\tvar err error\n\n\tif options.Index > 0 {\n\t\tctr, err := s.getSpecifiedContainer(ctx, projectName, oneOffExclude, true, options.Services[0], options.Index)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcontainers = append(containers, ctr)\n\t} else {\n\t\tcontainers, err = s.getContainers(ctx, projectName, oneOffExclude, true, options.Services...)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif options.Project != nil && len(options.Services) == 0 {\n\t\t// we run with an explicit compose.yaml, so only consider services defined in this file\n\t\toptions.Services = options.Project.ServiceNames()\n\t\tcontainers = containers.filter(isService(options.Services...))\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, ctr := range containers {\n\t\teg.Go(func() error 
{\n\t\t\terr := s.logContainer(ctx, consumer, ctr, options)\n\t\t\tif errdefs.IsNotImplemented(err) {\n\t\t\t\tlogrus.Warnf(\"Can't retrieve logs for %q: %s\", getCanonicalContainerName(ctr), err.Error())\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\treturn err\n\t\t})\n\t}\n\n\tif options.Follow {\n\t\tprinter := newLogPrinter(consumer)\n\n\t\tmonitor := newMonitor(s.apiClient(), projectName)\n\t\tif len(options.Services) > 0 {\n\t\t\tmonitor.withServices(options.Services)\n\t\t} else if options.Project != nil {\n\t\t\tmonitor.withServices(options.Project.ServiceNames())\n\t\t}\n\t\tmonitor.withListener(printer.HandleEvent)\n\t\tmonitor.withListener(func(event api.ContainerEvent) {\n\t\t\tif event.Type == api.ContainerEventStarted {\n\t\t\t\teg.Go(func() error {\n\t\t\t\t\tres, err := s.apiClient().ContainerInspect(ctx, event.ID, client.ContainerInspectOptions{})\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\n\t\t\t\t\terr = s.doLogContainer(ctx, consumer, event.Source, res.Container, api.LogOptions{\n\t\t\t\t\t\tFollow:     options.Follow,\n\t\t\t\t\t\tSince:      res.Container.State.StartedAt,\n\t\t\t\t\t\tUntil:      options.Until,\n\t\t\t\t\t\tTail:       options.Tail,\n\t\t\t\t\t\tTimestamps: options.Timestamps,\n\t\t\t\t\t})\n\t\t\t\t\tif errdefs.IsNotImplemented(err) {\n\t\t\t\t\t\t// ignore\n\t\t\t\t\t\treturn nil\n\t\t\t\t\t}\n\t\t\t\t\treturn err\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t\teg.Go(func() error {\n\t\t\t// pass ctx so monitor will immediately stop on SIGINT\n\t\t\treturn monitor.Start(ctx)\n\t\t})\n\t}\n\n\treturn eg.Wait()\n}\n\nfunc (s *composeService) logContainer(ctx context.Context, consumer api.LogConsumer, c container.Summary, options api.LogOptions) error {\n\tres, err := s.apiClient().ContainerInspect(ctx, c.ID, client.ContainerInspectOptions{})\n\tif err != nil {\n\t\treturn err\n\t}\n\tname := getContainerNameWithoutProject(c)\n\treturn s.doLogContainer(ctx, consumer, name, res.Container, options)\n}\n\nfunc (s *composeService) 
doLogContainer(ctx context.Context, consumer api.LogConsumer, name string, ctr container.InspectResponse, options api.LogOptions) error {\n\tr, err := s.apiClient().ContainerLogs(ctx, ctr.ID, client.ContainerLogsOptions{\n\t\tShowStdout: true,\n\t\tShowStderr: true,\n\t\tFollow:     options.Follow,\n\t\tSince:      options.Since,\n\t\tUntil:      options.Until,\n\t\tTail:       options.Tail,\n\t\tTimestamps: options.Timestamps,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer r.Close() //nolint:errcheck\n\n\tw := utils.GetWriter(func(line string) {\n\t\tconsumer.Log(name, line)\n\t})\n\tif ctr.Config.Tty {\n\t\t_, err = io.Copy(w, r)\n\t} else {\n\t\t_, err = stdcopy.StdCopy(w, w, r)\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "pkg/compose/logs_test.go",
    "content": "/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"io\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/docker/pkg/stdcopy\"\n\tcontainerType \"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\t\"go.uber.org/mock/gomock\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestComposeService_Logs_Demux(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\trequire.NoError(t, err)\n\n\tname := strings.ToLower(testProject)\n\n\tapi.EXPECT().ContainerList(t.Context(), client.ContainerListOptions{\n\t\tAll:     true,\n\t\tFilters: projectFilter(name).Add(\"label\", oneOffFilter(false), hasConfigHashLabel()),\n\t}).Return(\n\t\tclient.ContainerListResult{\n\t\t\tItems: []containerType.Summary{\n\t\t\t\ttestContainer(\"service\", \"c\", false),\n\t\t\t},\n\t\t},\n\t\tnil,\n\t)\n\n\tapi.EXPECT().\n\t\tContainerInspect(anyCancellableContext(), \"c\", gomock.Any()).\n\t\tReturn(client.ContainerInspectResult{\n\t\t\tContainer: containerType.InspectResponse{\n\t\t\t\tID:     \"c\",\n\t\t\t\tConfig: &containerType.Config{Tty: false},\n\t\t\t},\n\t\t}, 
nil)\n\tc1Reader, c1Writer := io.Pipe()\n\tt.Cleanup(func() {\n\t\t_ = c1Reader.Close()\n\t\t_ = c1Writer.Close()\n\t})\n\tc1Stdout := stdcopy.NewStdWriter(c1Writer, stdcopy.Stdout)\n\tc1Stderr := stdcopy.NewStdWriter(c1Writer, stdcopy.Stderr)\n\tgo func() {\n\t\t_, err := c1Stdout.Write([]byte(\"hello stdout\\n\"))\n\t\tassert.NoError(t, err, \"Writing to fake stdout\")\n\t\t_, err = c1Stderr.Write([]byte(\"hello stderr\\n\"))\n\t\tassert.NoError(t, err, \"Writing to fake stderr\")\n\t\t_ = c1Writer.Close()\n\t}()\n\tapi.EXPECT().ContainerLogs(anyCancellableContext(), \"c\", gomock.Any()).\n\t\tReturn(c1Reader, nil)\n\n\topts := compose.LogOptions{\n\t\tProject: &types.Project{\n\t\t\tServices: types.Services{\n\t\t\t\t\"service\": {Name: \"service\"},\n\t\t\t},\n\t\t},\n\t}\n\n\tconsumer := &testLogConsumer{}\n\terr = tested.Logs(t.Context(), name, consumer, opts)\n\trequire.NoError(t, err)\n\n\trequire.Equal(\n\t\tt,\n\t\t[]string{\"hello stdout\", \"hello stderr\"},\n\t\tconsumer.LogsForContainer(\"c\"),\n\t)\n}\n\n// TestComposeService_Logs_ServiceFiltering ensures that we do not include\n// logs from out-of-scope services based on the Compose file vs actual state.\n//\n// NOTE(milas): This test exists because each method is currently duplicating\n// a lot of the project/service filtering logic. 
We should consider moving it\n// to an earlier point in the loading process, at which point this test could\n// safely be removed.\nfunc TestComposeService_Logs_ServiceFiltering(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\trequire.NoError(t, err)\n\n\tname := strings.ToLower(testProject)\n\n\tapi.EXPECT().ContainerList(t.Context(), client.ContainerListOptions{\n\t\tAll:     true,\n\t\tFilters: projectFilter(name).Add(\"label\", oneOffFilter(false), hasConfigHashLabel()),\n\t}).Return(\n\t\tclient.ContainerListResult{\n\t\t\tItems: []containerType.Summary{\n\t\t\t\ttestContainer(\"serviceA\", \"c1\", false),\n\t\t\t\ttestContainer(\"serviceA\", \"c2\", false),\n\t\t\t\t// serviceB will be filtered out by the project definition to\n\t\t\t\t// ensure we ignore \"orphan\" containers\n\t\t\t\ttestContainer(\"serviceB\", \"c3\", false),\n\t\t\t\ttestContainer(\"serviceC\", \"c4\", false),\n\t\t\t},\n\t\t},\n\t\tnil,\n\t)\n\n\tfor _, id := range []string{\"c1\", \"c2\", \"c4\"} {\n\t\tapi.EXPECT().\n\t\t\tContainerInspect(anyCancellableContext(), id, gomock.Any()).\n\t\t\tReturn(\n\t\t\t\tclient.ContainerInspectResult{\n\t\t\t\t\tContainer: containerType.InspectResponse{\n\t\t\t\t\t\tID:     id,\n\t\t\t\t\t\tConfig: &containerType.Config{Tty: true},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tnil,\n\t\t\t)\n\t\tapi.EXPECT().ContainerLogs(anyCancellableContext(), id, gomock.Any()).\n\t\t\tReturn(io.NopCloser(strings.NewReader(\"hello \"+id+\"\\n\")), nil).\n\t\t\tTimes(1)\n\t}\n\n\t// this simulates passing `--file` with a Compose file that does NOT\n\t// reference `serviceB` even though this project has running containers for it\n\tproj := &types.Project{\n\t\tServices: types.Services{\n\t\t\t\"serviceA\": {Name: \"serviceA\"},\n\t\t\t\"serviceC\": {Name: \"serviceC\"},\n\t\t},\n\t}\n\tconsumer := &testLogConsumer{}\n\topts := compose.LogOptions{\n\t\tProject: 
proj,\n\t}\n\terr = tested.Logs(t.Context(), name, consumer, opts)\n\trequire.NoError(t, err)\n\n\trequire.Equal(t, []string{\"hello c1\"}, consumer.LogsForContainer(\"c1\"))\n\trequire.Equal(t, []string{\"hello c2\"}, consumer.LogsForContainer(\"c2\"))\n\trequire.Empty(t, consumer.LogsForContainer(\"c3\"))\n\trequire.Equal(t, []string{\"hello c4\"}, consumer.LogsForContainer(\"c4\"))\n}\n\ntype testLogConsumer struct {\n\tmu sync.Mutex\n\t// logs is keyed by container ID; values are log lines\n\tlogs map[string][]string\n}\n\nfunc (l *testLogConsumer) Log(containerName, message string) {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\tif l.logs == nil {\n\t\tl.logs = make(map[string][]string)\n\t}\n\tl.logs[containerName] = append(l.logs[containerName], message)\n}\n\nfunc (l *testLogConsumer) Err(containerName, message string) {\n\tl.Log(containerName, message)\n}\n\nfunc (l *testLogConsumer) Status(containerName, msg string) {}\n\nfunc (l *testLogConsumer) LogsForContainer(containerName string) []string {\n\tl.mu.Lock()\n\tdefer l.mu.Unlock()\n\treturn l.logs[containerName]\n}\n"
  },
  {
    "path": "pkg/compose/ls.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"slices\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) List(ctx context.Context, opts api.ListOptions) ([]api.Stack, error) {\n\tlist, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: make(client.Filters).Add(\"label\", api.ProjectLabel).Add(\"label\", api.ConfigHashLabel),\n\t\tAll:     opts.All,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn containersToStacks(list.Items)\n}\n\nfunc containersToStacks(containers []container.Summary) ([]api.Stack, error) {\n\tcontainersByLabel, keys, err := groupContainerByLabel(containers, api.ProjectLabel)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar projects []api.Stack\n\tfor _, project := range keys {\n\t\tconfigFiles, err := combinedConfigFiles(containersByLabel[project])\n\t\tif err != nil {\n\t\t\tlogrus.Warn(err.Error())\n\t\t\tconfigFiles = \"N/A\"\n\t\t}\n\n\t\tprojects = append(projects, api.Stack{\n\t\t\tID:          project,\n\t\t\tName:        project,\n\t\t\tStatus:      combinedStatus(containerToState(containersByLabel[project])),\n\t\t\tConfigFiles: configFiles,\n\t\t})\n\t}\n\treturn projects, 
nil\n}\n\nfunc combinedConfigFiles(containers []container.Summary) (string, error) {\n\tconfigFiles := []string{}\n\n\tfor _, c := range containers {\n\t\tfiles, ok := c.Labels[api.ConfigFilesLabel]\n\t\tif !ok {\n\t\t\treturn \"\", fmt.Errorf(\"no label %q set on container %q of compose project\", api.ConfigFilesLabel, c.ID)\n\t\t}\n\n\t\tfor f := range strings.SplitSeq(files, \",\") {\n\t\t\tif !slices.Contains(configFiles, f) {\n\t\t\t\tconfigFiles = append(configFiles, f)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn strings.Join(configFiles, \",\"), nil\n}\n\nfunc containerToState(containers []container.Summary) []string {\n\tstatuses := []string{}\n\tfor _, c := range containers {\n\t\tstatuses = append(statuses, string(c.State))\n\t}\n\treturn statuses\n}\n\nfunc combinedStatus(statuses []string) string {\n\tnbByStatus := map[string]int{}\n\tkeys := []string{}\n\tfor _, status := range statuses {\n\t\tnb, ok := nbByStatus[status]\n\t\tif !ok {\n\t\t\tnb = 0\n\t\t\tkeys = append(keys, status)\n\t\t}\n\t\tnbByStatus[status] = nb + 1\n\t}\n\tsort.Strings(keys)\n\tresult := \"\"\n\tfor _, status := range keys {\n\t\tnb := nbByStatus[status]\n\t\tif result != \"\" {\n\t\t\tresult += \", \"\n\t\t}\n\t\tresult += fmt.Sprintf(\"%s(%d)\", status, nb)\n\t}\n\treturn result\n}\n\nfunc groupContainerByLabel(containers []container.Summary, labelName string) (map[string][]container.Summary, []string, error) {\n\tcontainersByLabel := map[string][]container.Summary{}\n\tkeys := []string{}\n\tfor _, c := range containers {\n\t\tlabel, ok := c.Labels[labelName]\n\t\tif !ok {\n\t\t\treturn nil, nil, fmt.Errorf(\"no label %q set on container %q of compose project\", labelName, c.ID)\n\t\t}\n\t\tlabelContainers, ok := containersByLabel[label]\n\t\tif !ok {\n\t\t\tlabelContainers = []container.Summary{}\n\t\t\tkeys = append(keys, label)\n\t\t}\n\t\tlabelContainers = append(labelContainers, c)\n\t\tcontainersByLabel[label] = labelContainers\n\t}\n\tsort.Strings(keys)\n\treturn 
containersByLabel, keys, nil\n}\n"
  },
  {
    "path": "pkg/compose/ls_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestContainersToStacks(t *testing.T) {\n\tcontainers := []container.Summary{\n\t\t{\n\t\t\tID:     \"service1\",\n\t\t\tState:  \"running\",\n\t\t\tLabels: map[string]string{api.ProjectLabel: \"project1\", api.ConfigFilesLabel: \"/home/docker-compose.yaml\"},\n\t\t},\n\t\t{\n\t\t\tID:     \"service2\",\n\t\t\tState:  \"running\",\n\t\t\tLabels: map[string]string{api.ProjectLabel: \"project1\", api.ConfigFilesLabel: \"/home/docker-compose.yaml\"},\n\t\t},\n\t\t{\n\t\t\tID:     \"service3\",\n\t\t\tState:  \"running\",\n\t\t\tLabels: map[string]string{api.ProjectLabel: \"project2\", api.ConfigFilesLabel: \"/home/project2-docker-compose.yaml\"},\n\t\t},\n\t}\n\tstacks, err := containersToStacks(containers)\n\tassert.NilError(t, err)\n\tassert.DeepEqual(t, stacks, []api.Stack{\n\t\t{\n\t\t\tID:          \"project1\",\n\t\t\tName:        \"project1\",\n\t\t\tStatus:      \"running(2)\",\n\t\t\tConfigFiles: \"/home/docker-compose.yaml\",\n\t\t},\n\t\t{\n\t\t\tID:          \"project2\",\n\t\t\tName:        \"project2\",\n\t\t\tStatus:      \"running(1)\",\n\t\t\tConfigFiles: \"/home/project2-docker-compose.yaml\",\n\t\t},\n\t})\n}\n\nfunc 
TestStacksMixedStatus(t *testing.T) {\n\tassert.Equal(t, combinedStatus([]string{\"running\"}), \"running(1)\")\n\tassert.Equal(t, combinedStatus([]string{\"running\", \"running\", \"running\"}), \"running(3)\")\n\tassert.Equal(t, combinedStatus([]string{\"running\", \"exited\", \"running\"}), \"exited(1), running(2)\")\n}\n\nfunc TestCombinedConfigFiles(t *testing.T) {\n\tcontainersByLabel := map[string][]container.Summary{\n\t\t\"project1\": {\n\t\t\t{\n\t\t\t\tID:     \"service1\",\n\t\t\t\tState:  \"running\",\n\t\t\t\tLabels: map[string]string{api.ProjectLabel: \"project1\", api.ConfigFilesLabel: \"/home/docker-compose.yaml\"},\n\t\t\t},\n\t\t\t{\n\t\t\t\tID:     \"service2\",\n\t\t\t\tState:  \"running\",\n\t\t\t\tLabels: map[string]string{api.ProjectLabel: \"project1\", api.ConfigFilesLabel: \"/home/docker-compose.yaml\"},\n\t\t\t},\n\t\t},\n\t\t\"project2\": {\n\t\t\t{\n\t\t\t\tID:     \"service3\",\n\t\t\t\tState:  \"running\",\n\t\t\t\tLabels: map[string]string{api.ProjectLabel: \"project2\", api.ConfigFilesLabel: \"/home/project2-docker-compose.yaml\"},\n\t\t\t},\n\t\t},\n\t\t\"project3\": {\n\t\t\t{\n\t\t\t\tID:     \"service4\",\n\t\t\t\tState:  \"running\",\n\t\t\t\tLabels: map[string]string{api.ProjectLabel: \"project3\"},\n\t\t\t},\n\t\t},\n\t}\n\n\ttestData := map[string]struct {\n\t\tConfigFiles string\n\t\tError       error\n\t}{\n\t\t\"project1\": {ConfigFiles: \"/home/docker-compose.yaml\", Error: nil},\n\t\t\"project2\": {ConfigFiles: \"/home/project2-docker-compose.yaml\", Error: nil},\n\t\t\"project3\": {ConfigFiles: \"\", Error: fmt.Errorf(\"no label %q set on container %q of compose project\", api.ConfigFilesLabel, \"service4\")},\n\t}\n\n\tfor project, containers := range containersByLabel {\n\t\tconfigFiles, err := combinedConfigFiles(containers)\n\n\t\texpected := testData[project]\n\n\t\tif expected.Error != nil {\n\t\t\tassert.Error(t, err, expected.Error.Error())\n\t\t} else {\n\t\t\tassert.Equal(t, err, 
expected.Error)\n\t\t}\n\t\tassert.Equal(t, configFiles, expected.ConfigFiles)\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/model.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os/exec\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/docker/cli/cli-plugins/manager\"\n\t\"github.com/docker/docker/api/types/versions\"\n\t\"github.com/spf13/cobra\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) ensureModels(ctx context.Context, project *types.Project, quietPull bool) error {\n\tif len(project.Models) == 0 {\n\t\treturn nil\n\t}\n\n\tmdlAPI, err := s.newModelAPI(project)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer mdlAPI.Close()\n\tavailableModels, err := mdlAPI.ListModels(ctx)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\teg.Go(func() error {\n\t\treturn mdlAPI.SetModelVariables(ctx, project)\n\t})\n\n\tfor name, config := range project.Models {\n\t\tif config.Name == \"\" {\n\t\t\tconfig.Name = name\n\t\t}\n\t\teg.Go(func() error {\n\t\t\tif !slices.Contains(availableModels, config.Model) {\n\t\t\t\t// use a goroutine-local err; sharing the outer err across eg.Go goroutines is a data race\n\t\t\t\terr := mdlAPI.PullModel(ctx, config, quietPull, s.events)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn mdlAPI.ConfigureModel(ctx, config, s.events)\n\t\t})\n\t}\n\treturn eg.Wait()\n}\n\ntype modelAPI struct {\n\tpath    string\n\tenv     []string\n\tprepare func(ctx context.Context, cmd *exec.Cmd) error\n\tcleanup func()\n\tversion string\n}\n\nfunc (s *composeService) newModelAPI(project *types.Project) (*modelAPI, error) {\n\tdockerModel, err := manager.GetPlugin(\"model\", s.dockerCli, &cobra.Command{})\n\tif err != nil {\n\t\tif errdefs.IsNotFound(err) {\n\t\t\treturn nil, fmt.Errorf(\"'models' support requires Docker Model plugin\")\n\t\t}\n\t\treturn nil, err\n\t}\n\tif dockerModel.Err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to load Docker Model plugin: %w\", dockerModel.Err)\n\t}\n\tendpoint, cleanup, err := s.propagateDockerEndpoint()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &modelAPI{\n\t\tpath:    dockerModel.Path,\n\t\tversion: dockerModel.Version,\n\t\tprepare: func(ctx context.Context, cmd *exec.Cmd) error {\n\t\t\treturn s.prepareShellOut(ctx, project.Environment, cmd)\n\t\t},\n\t\tcleanup: cleanup,\n\t\tenv:     append(project.Environment.Values(), endpoint...),\n\t}, nil\n}\n\nfunc (m *modelAPI) Close() {\n\tm.cleanup()\n}\n\nfunc (m *modelAPI) PullModel(ctx context.Context, model types.ModelConfig, quietPull bool, events api.EventProcessor) error {\n\tevents.On(api.Resource{\n\t\tID:     model.Name,\n\t\tStatus: api.Working,\n\t\tText:   api.StatusPulling,\n\t})\n\n\tcmd := exec.CommandContext(ctx, m.path, \"pull\", model.Model)\n\terr := m.prepare(ctx, cmd)\n\tif err != nil {\n\t\treturn err\n\t}\n\tstream, err := cmd.StdoutPipe()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = cmd.Start()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tscanner := bufio.NewScanner(stream)\n\tfor scanner.Scan() {\n\t\tmsg := scanner.Text()\n\t\tif msg == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif !quietPull {\n\t\t\tevents.On(api.Resource{\n\t\t\t\tID:     model.Name,\n\t\t\t\tStatus: api.Working,\n\t\t\t\tText:   api.StatusPulling,\n\t\t\t})\n\t\t}\n\t}\n\n\terr = cmd.Wait()\n\tif err != nil {\n\t\tevents.On(errorEvent(model.Name, 
err.Error()))\n\t\treturn err\n\t}\n\tevents.On(api.Resource{\n\t\tID:     model.Name,\n\t\tStatus: api.Done,\n\t\tText:   api.StatusPulled,\n\t})\n\treturn nil\n}\n\nfunc (m *modelAPI) ConfigureModel(ctx context.Context, config types.ModelConfig, events api.EventProcessor) error {\n\tevents.On(api.Resource{\n\t\tID:     config.Name,\n\t\tStatus: api.Working,\n\t\tText:   api.StatusConfiguring,\n\t})\n\t// configure [--context-size=<n>] MODEL [-- <runtime-flags...>]\n\targs := []string{\"configure\"}\n\tif config.ContextSize > 0 {\n\t\targs = append(args, \"--context-size\", strconv.Itoa(config.ContextSize))\n\t}\n\targs = append(args, config.Model)\n\t// Only append RuntimeFlags if docker model CLI version is >= v1.0.6\n\tif len(config.RuntimeFlags) != 0 && m.supportsRuntimeFlags() {\n\t\targs = append(args, \"--\")\n\t\targs = append(args, config.RuntimeFlags...)\n\t}\n\tcmd := exec.CommandContext(ctx, m.path, args...)\n\terr := m.prepare(ctx, cmd)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = cmd.Run()\n\tif err != nil {\n\t\tevents.On(errorEvent(config.Name, err.Error()))\n\t\treturn err\n\t}\n\tevents.On(api.Resource{\n\t\tID:     config.Name,\n\t\tStatus: api.Done,\n\t\tText:   api.StatusConfigured,\n\t})\n\treturn nil\n}\n\nfunc (m *modelAPI) SetModelVariables(ctx context.Context, project *types.Project) error {\n\tcmd := exec.CommandContext(ctx, m.path, \"status\", \"--json\")\n\terr := m.prepare(ctx, cmd)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tstatusOut, err := cmd.CombinedOutput()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error checking docker-model status: %w\", err)\n\t}\n\ttype Status struct {\n\t\tEndpoint string `json:\"endpoint\"`\n\t}\n\n\tvar status Status\n\terr = json.Unmarshal(statusOut, &status)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, service := range project.Services {\n\t\tfor ref, modelConfig := range service.Models {\n\t\t\tmodel := project.Models[ref]\n\t\t\tvarPrefix := strings.ReplaceAll(strings.ToUpper(ref), \"-\", 
\"_\")\n\t\t\tvar variable string\n\t\t\tif modelConfig != nil && modelConfig.ModelVariable != \"\" {\n\t\t\t\tvariable = modelConfig.ModelVariable\n\t\t\t} else {\n\t\t\t\tvariable = varPrefix + \"_MODEL\"\n\t\t\t}\n\t\t\tservice.Environment[variable] = &model.Model\n\n\t\t\tif modelConfig != nil && modelConfig.EndpointVariable != \"\" {\n\t\t\t\tvariable = modelConfig.EndpointVariable\n\t\t\t} else {\n\t\t\t\tvariable = varPrefix + \"_URL\"\n\t\t\t}\n\t\t\tservice.Environment[variable] = &status.Endpoint\n\t\t}\n\t}\n\treturn nil\n}\n\ntype Model struct {\n\tId      string   `json:\"id\"`\n\tTags    []string `json:\"tags\"`\n\tCreated int      `json:\"created\"`\n\tConfig  struct {\n\t\tFormat       string `json:\"format\"`\n\t\tQuantization string `json:\"quantization\"`\n\t\tParameters   string `json:\"parameters\"`\n\t\tArchitecture string `json:\"architecture\"`\n\t\tSize         string `json:\"size\"`\n\t} `json:\"config\"`\n}\n\nfunc (m *modelAPI) ListModels(ctx context.Context) ([]string, error) {\n\tcmd := exec.CommandContext(ctx, m.path, \"ls\", \"--json\")\n\terr := m.prepare(ctx, cmd)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\toutput, err := cmd.CombinedOutput()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error checking available models: %w\", err)\n\t}\n\n\ttype AvailableModel struct {\n\t\tId      string   `json:\"id\"`\n\t\tTags    []string `json:\"tags\"`\n\t\tCreated int      `json:\"created\"`\n\t}\n\n\tmodels := []AvailableModel{}\n\terr = json.Unmarshal(output, &models)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error unmarshalling available models: %w\", err)\n\t}\n\tvar availableModels []string\n\tfor _, model := range models {\n\t\tavailableModels = append(availableModels, model.Tags...)\n\t}\n\treturn availableModels, nil\n}\n\n// supportsRuntimeFlags checks if the docker model version supports runtime flags\n// Runtime flags are supported in version >= v1.0.6\nfunc (m *modelAPI) supportsRuntimeFlags() bool {\n\t// If 
version is not cached, don't append runtime flags to be safe\n\tif m.version == \"\" {\n\t\treturn false\n\t}\n\n\t// Strip 'v' prefix if present (e.g., \"v1.0.6\" -> \"1.0.6\")\n\tversionStr := strings.TrimPrefix(m.version, \"v\")\n\treturn !versions.LessThan(versionStr, \"1.0.6\")\n}\n"
  },
  {
    "path": "pkg/compose/monitor.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strconv\"\n\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/moby/moby/api/types/events\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\ntype monitor struct {\n\tapiClient client.APIClient\n\tproject   string\n\t// services tells us which service to consider and those we can ignore, maybe ran by a concurrent compose command\n\tservices  map[string]bool\n\tlisteners []api.ContainerEventListener\n}\n\nfunc newMonitor(apiClient client.APIClient, project string) *monitor {\n\treturn &monitor{\n\t\tapiClient: apiClient,\n\t\tproject:   project,\n\t\tservices:  map[string]bool{},\n\t}\n}\n\nfunc (c *monitor) withServices(services []string) {\n\tfor _, name := range services {\n\t\tc.services[name] = true\n\t}\n}\n\n// Start runs monitor to detect application events and return after termination\n//\n//nolint:gocyclo\nfunc (c *monitor) Start(ctx context.Context) error {\n\t// collect initial application container\n\tinitialState, err := c.apiClient.ContainerList(ctx, client.ContainerListOptions{\n\t\tAll: true,\n\t\tFilters: projectFilter(c.project).Add(\"label\",\n\t\t\toneOffFilter(false),\n\t\t\thasConfigHashLabel(),\n\t\t),\n\t})\n\tif err != nil {\n\t\treturn 
err\n\t}\n\n\t// containers is the set of container IDs the application is based on\n\tcontainers := utils.Set[string]{}\n\tfor _, ctr := range initialState.Items {\n\t\tif len(c.services) == 0 || c.services[ctr.Labels[api.ServiceLabel]] {\n\t\t\tcontainers.Add(ctr.ID)\n\t\t}\n\t}\n\trestarting := utils.Set[string]{}\n\n\tres := c.apiClient.Events(ctx, client.EventsListOptions{\n\t\tFilters: projectFilter(c.project).Add(\"type\", \"container\"),\n\t})\n\tfor {\n\t\tif len(containers) == 0 {\n\t\t\treturn nil\n\t\t}\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn nil\n\t\tcase err := <-res.Err:\n\t\t\treturn err\n\t\tcase event := <-res.Messages:\n\t\t\tif len(c.services) > 0 && !c.services[event.Actor.Attributes[api.ServiceLabel]] {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tctr, err := c.getContainerSummary(event)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tswitch event.Action {\n\t\t\tcase events.ActionCreate:\n\t\t\t\tif len(c.services) == 0 || c.services[ctr.Labels[api.ServiceLabel]] {\n\t\t\t\t\tcontainers.Add(ctr.ID)\n\t\t\t\t}\n\t\t\t\tevtType := api.ContainerEventCreated\n\t\t\t\tif _, ok := ctr.Labels[api.ContainerReplaceLabel]; ok {\n\t\t\t\t\tevtType = api.ContainerEventRecreated\n\t\t\t\t}\n\t\t\t\tfor _, listener := range c.listeners {\n\t\t\t\t\tlistener(newContainerEvent(event.TimeNano, ctr, evtType))\n\t\t\t\t}\n\t\t\t\tlogrus.Debugf(\"container %s created\", ctr.Name)\n\t\t\tcase events.ActionStart:\n\t\t\t\trestarted := restarting.Has(ctr.ID)\n\t\t\t\tif restarted {\n\t\t\t\t\tlogrus.Debugf(\"container %s restarted\", ctr.Name)\n\t\t\t\t\tfor _, listener := range c.listeners {\n\t\t\t\t\t\tlistener(newContainerEvent(event.TimeNano, ctr, api.ContainerEventStarted, func(e *api.ContainerEvent) {\n\t\t\t\t\t\t\te.Restarting = restarted\n\t\t\t\t\t\t}))\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tlogrus.Debugf(\"container %s started\", ctr.Name)\n\t\t\t\t\tfor _, listener := range c.listeners 
{\n\t\t\t\t\t\tlistener(newContainerEvent(event.TimeNano, ctr, api.ContainerEventStarted))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif len(c.services) == 0 || c.services[ctr.Labels[api.ServiceLabel]] {\n\t\t\t\t\tcontainers.Add(ctr.ID)\n\t\t\t\t}\n\t\t\tcase events.ActionRestart:\n\t\t\t\tfor _, listener := range c.listeners {\n\t\t\t\t\tlistener(newContainerEvent(event.TimeNano, ctr, api.ContainerEventRestarted))\n\t\t\t\t}\n\t\t\t\tlogrus.Debugf(\"container %s restarted\", ctr.Name)\n\t\t\tcase events.ActionDie:\n\t\t\t\tlogrus.Debugf(\"container %s exited with code %d\", ctr.Name, ctr.ExitCode)\n\t\t\t\tinspect, err := c.apiClient.ContainerInspect(ctx, event.Actor.ID, client.ContainerInspectOptions{})\n\t\t\t\tif errdefs.IsNotFound(err) {\n\t\t\t\t\t// Source is already removed\n\t\t\t\t} else if err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\n\t\t\t\tif inspect.Container.State != nil && (inspect.Container.State.Restarting || inspect.Container.State.Running) {\n\t\t\t\t\t// State.Restarting is set by engine when container is configured to restart on exit\n\t\t\t\t\t// on ContainerRestart it doesn't (see https://github.com/moby/moby/issues/45538)\n\t\t\t\t\t// container state still is reported as \"running\"\n\t\t\t\t\tlogrus.Debugf(\"container %s is restarting\", ctr.Name)\n\t\t\t\t\trestarting.Add(ctr.ID)\n\t\t\t\t\tfor _, listener := range c.listeners {\n\t\t\t\t\t\tlistener(newContainerEvent(event.TimeNano, ctr, api.ContainerEventExited, func(e *api.ContainerEvent) {\n\t\t\t\t\t\t\te.Restarting = true\n\t\t\t\t\t\t}))\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tfor _, listener := range c.listeners {\n\t\t\t\t\t\tlistener(newContainerEvent(event.TimeNano, ctr, api.ContainerEventExited))\n\t\t\t\t\t}\n\t\t\t\t\tcontainers.Remove(ctr.ID)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc newContainerEvent(timeNano int64, ctr *api.ContainerSummary, eventType int, opts ...func(e *api.ContainerEvent)) api.ContainerEvent {\n\tname := ctr.Name\n\tdefaultName := 
getDefaultContainerName(ctr.Project, ctr.Labels[api.ServiceLabel], ctr.Labels[api.ContainerNumberLabel])\n\tif name == defaultName {\n\t\t// remove project- prefix\n\t\tname = name[len(ctr.Project)+1:]\n\t}\n\n\tevent := api.ContainerEvent{\n\t\tType:      eventType,\n\t\tContainer: ctr,\n\t\tTime:      timeNano,\n\t\tSource:    name,\n\t\tID:        ctr.ID,\n\t\tService:   ctr.Service,\n\t\tExitCode:  ctr.ExitCode,\n\t}\n\tfor _, opt := range opts {\n\t\topt(&event)\n\t}\n\treturn event\n}\n\nfunc (c *monitor) getContainerSummary(event events.Message) (*api.ContainerSummary, error) {\n\tctr := &api.ContainerSummary{\n\t\tID:      event.Actor.ID,\n\t\tName:    event.Actor.Attributes[\"name\"],\n\t\tProject: c.project,\n\t\tService: event.Actor.Attributes[api.ServiceLabel],\n\t\tLabels:  event.Actor.Attributes, // More than just labels, but that's the closest the API gives us\n\t}\n\tif ec, ok := event.Actor.Attributes[\"exitCode\"]; ok {\n\t\texitCode, err := strconv.Atoi(ec)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tctr.ExitCode = exitCode\n\t}\n\treturn ctr, nil\n}\n\nfunc (c *monitor) withListener(listener api.ContainerEventListener) {\n\tc.listeners = append(c.listeners, listener)\n}\n"
  },
  {
    "path": "pkg/compose/pause.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Pause(ctx context.Context, projectName string, options api.PauseOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.pause(ctx, strings.ToLower(projectName), options)\n\t}, \"pause\", s.events)\n}\n\nfunc (s *composeService) pause(ctx context.Context, projectName string, options api.PauseOptions) error {\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffExclude, false, options.Services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif options.Project != nil {\n\t\tcontainers = containers.filter(isService(options.Project.ServiceNames()...))\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tcontainers.forEach(func(container container.Summary) {\n\t\teg.Go(func() error {\n\t\t\t_, err := s.apiClient().ContainerPause(ctx, container.ID, client.ContainerPauseOptions{})\n\t\t\tif err == nil {\n\t\t\t\teventName := getContainerProgressName(container)\n\t\t\t\ts.events.On(newEvent(eventName, api.Done, \"Paused\"))\n\t\t\t}\n\t\t\treturn err\n\t\t})\n\t})\n\treturn eg.Wait()\n}\n\nfunc (s *composeService) UnPause(ctx context.Context, projectName string, 
options api.PauseOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.unPause(ctx, strings.ToLower(projectName), options)\n\t}, \"unpause\", s.events)\n}\n\nfunc (s *composeService) unPause(ctx context.Context, projectName string, options api.PauseOptions) error {\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffExclude, false, options.Services...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif options.Project != nil {\n\t\tcontainers = containers.filter(isService(options.Project.ServiceNames()...))\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tcontainers.forEach(func(ctr container.Summary) {\n\t\teg.Go(func() error {\n\t\t\t// use := so each goroutine gets its own err instead of racing on the enclosing one\n\t\t\t_, err := s.apiClient().ContainerUnpause(ctx, ctr.ID, client.ContainerUnpauseOptions{})\n\t\t\tif err == nil {\n\t\t\t\teventName := getContainerProgressName(ctr)\n\t\t\t\ts.events.On(newEvent(eventName, api.Done, \"Unpaused\"))\n\t\t\t}\n\t\t\treturn err\n\t\t})\n\t})\n\treturn eg.Wait()\n}\n"
  },
  {
    "path": "pkg/compose/plugins.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/docker/cli/cli-plugins/manager\"\n\t\"github.com/docker/cli/cli/config\"\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/spf13/cobra\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\ntype JsonMessage struct {\n\tType    string `json:\"type\"`\n\tMessage string `json:\"message\"`\n}\n\nconst (\n\tErrorType                 = \"error\"\n\tInfoType                  = \"info\"\n\tSetEnvType                = \"setenv\"\n\tDebugType                 = \"debug\"\n\tproviderMetadataDirectory = \"compose/providers\"\n)\n\nvar mux sync.Mutex\n\nfunc (s *composeService) runPlugin(ctx context.Context, project *types.Project, service types.ServiceConfig, command string) error {\n\tprovider := *service.Provider\n\n\tplugin, err := s.getPluginBinaryPath(provider.Type)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcmd, err := s.setupPluginCommand(ctx, project, service, plugin, command)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvariables, err := s.executePlugin(cmd, command, service)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tmux.Lock()\n\tdefer 
mux.Unlock()\n\tfor name, svc := range project.Services {\n\t\tif _, ok := svc.DependsOn[service.Name]; ok {\n\t\t\tprefix := strings.ToUpper(service.Name) + \"_\"\n\t\t\tfor key, val := range variables {\n\t\t\t\tsvc.Environment[prefix+key] = &val\n\t\t\t}\n\t\t\tproject.Services[name] = svc\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (s *composeService) executePlugin(cmd *exec.Cmd, command string, service types.ServiceConfig) (types.Mapping, error) {\n\tvar action string\n\tswitch command {\n\tcase \"up\":\n\t\ts.events.On(creatingEvent(service.Name))\n\t\taction = \"create\"\n\tcase \"down\":\n\t\ts.events.On(removingEvent(service.Name))\n\t\taction = \"remove\"\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unsupported plugin command: %s\", command)\n\t}\n\n\tstdout, err := cmd.StdoutPipe()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = cmd.Start()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tdecoder := json.NewDecoder(stdout)\n\tdefer func() { _ = stdout.Close() }()\n\n\tvariables := types.Mapping{}\n\n\tfor {\n\t\tvar msg JsonMessage\n\t\terr = decoder.Decode(&msg)\n\t\tif errors.Is(err, io.EOF) {\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tswitch msg.Type {\n\t\tcase ErrorType:\n\t\t\ts.events.On(newEvent(service.Name, api.Error, msg.Message))\n\t\t\treturn nil, errors.New(msg.Message)\n\t\tcase InfoType:\n\t\t\ts.events.On(newEvent(service.Name, api.Working, msg.Message))\n\t\tcase SetEnvType:\n\t\t\tkey, val, found := strings.Cut(msg.Message, \"=\")\n\t\t\tif !found {\n\t\t\t\treturn nil, fmt.Errorf(\"invalid response from plugin: %s\", msg.Message)\n\t\t\t}\n\t\t\tvariables[key] = val\n\t\tcase DebugType:\n\t\t\tlogrus.Debugf(\"%s: %s\", service.Name, msg.Message)\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"invalid response from plugin: %s\", msg.Type)\n\t\t}\n\t}\n\n\terr = cmd.Wait()\n\tif err != nil {\n\t\ts.events.On(errorEvent(service.Name, err.Error()))\n\t\treturn nil, fmt.Errorf(\"failed to %s service provider: %s\", 
action, err.Error())\n\t}\n\tswitch command {\n\tcase \"up\":\n\t\ts.events.On(createdEvent(service.Name))\n\tcase \"down\":\n\t\ts.events.On(removedEvent(service.Name))\n\t}\n\treturn variables, nil\n}\n\nfunc (s *composeService) getPluginBinaryPath(provider string) (path string, err error) {\n\tif provider == \"compose\" {\n\t\treturn \"\", errors.New(\"'compose' is not a valid provider type\")\n\t}\n\tplugin, err := manager.GetPlugin(provider, s.dockerCli, &cobra.Command{})\n\tif err == nil {\n\t\tpath = plugin.Path\n\t}\n\tif errdefs.IsNotFound(err) {\n\t\tpath, err = exec.LookPath(executable(provider))\n\t}\n\treturn path, err\n}\n\nfunc (s *composeService) setupPluginCommand(ctx context.Context, project *types.Project, service types.ServiceConfig, path, command string) (*exec.Cmd, error) {\n\tcmdOptionsMetadata := s.getPluginMetadata(path, service.Provider.Type, project)\n\tvar currentCommandMetadata CommandMetadata\n\tswitch command {\n\tcase \"up\":\n\t\tcurrentCommandMetadata = cmdOptionsMetadata.Up\n\tcase \"down\":\n\t\tcurrentCommandMetadata = cmdOptionsMetadata.Down\n\t}\n\n\tprovider := *service.Provider\n\tcommandMetadataIsEmpty := cmdOptionsMetadata.IsEmpty()\n\tif err := currentCommandMetadata.CheckRequiredParameters(provider); !commandMetadataIsEmpty && err != nil {\n\t\treturn nil, err\n\t}\n\n\targs := []string{\"compose\", fmt.Sprintf(\"--project-name=%s\", project.Name), command}\n\tfor k, v := range provider.Options {\n\t\tfor _, value := range v {\n\t\t\tif _, ok := currentCommandMetadata.GetParameter(k); commandMetadataIsEmpty || ok {\n\t\t\t\targs = append(args, fmt.Sprintf(\"--%s=%s\", k, value))\n\t\t\t}\n\t\t}\n\t}\n\targs = append(args, service.Name)\n\n\tcmd := exec.CommandContext(ctx, path, args...)\n\n\terr := s.prepareShellOut(ctx, project.Environment, cmd)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn cmd, nil\n}\n\nfunc (s *composeService) getPluginMetadata(path, command string, project *types.Project) ProviderMetadata 
{\n\tcmd := exec.Command(path, \"compose\", \"metadata\")\n\terr := s.prepareShellOut(context.Background(), project.Environment, cmd)\n\tif err != nil {\n\t\tlogrus.Debugf(\"failed to prepare plugin metadata command: %v\", err)\n\t\treturn ProviderMetadata{}\n\t}\n\tstdout := &bytes.Buffer{}\n\tcmd.Stdout = stdout\n\n\tif err := cmd.Run(); err != nil {\n\t\tlogrus.Debugf(\"failed to start plugin metadata command: %v\", err)\n\t\treturn ProviderMetadata{}\n\t}\n\n\tvar metadata ProviderMetadata\n\tif err := json.Unmarshal(stdout.Bytes(), &metadata); err != nil {\n\t\toutput, _ := io.ReadAll(stdout)\n\t\tlogrus.Debugf(\"failed to decode plugin metadata: %v - %s\", err, output)\n\t\treturn ProviderMetadata{}\n\t}\n\t// Save metadata into docker home directory to be used by Docker LSP tool\n\t// Just log the error as it's not a critical error for the main flow\n\tmetadataDir := filepath.Join(config.Dir(), providerMetadataDirectory)\n\tif err := os.MkdirAll(metadataDir, 0o700); err == nil {\n\t\tmetadataFilePath := filepath.Join(metadataDir, command+\".json\")\n\t\tif err := os.WriteFile(metadataFilePath, stdout.Bytes(), 0o600); err != nil {\n\t\t\tlogrus.Debugf(\"failed to save plugin metadata: %v\", err)\n\t\t}\n\t} else {\n\t\tlogrus.Debugf(\"failed to create plugin metadata directory: %v\", err)\n\t}\n\treturn metadata\n}\n\ntype ProviderMetadata struct {\n\tDescription string          `json:\"description\"`\n\tUp          CommandMetadata `json:\"up\"`\n\tDown        CommandMetadata `json:\"down\"`\n}\n\nfunc (p ProviderMetadata) IsEmpty() bool {\n\treturn p.Description == \"\" && p.Up.Parameters == nil && p.Down.Parameters == nil\n}\n\ntype CommandMetadata struct {\n\tParameters []ParameterMetadata `json:\"parameters\"`\n}\n\ntype ParameterMetadata struct {\n\tName        string `json:\"name\"`\n\tDescription string `json:\"description\"`\n\tRequired    bool   `json:\"required\"`\n\tType        string `json:\"type\"`\n\tDefault     string 
`json:\"default,omitempty\"`\n}\n\nfunc (c CommandMetadata) GetParameter(paramName string) (ParameterMetadata, bool) {\n\tfor _, p := range c.Parameters {\n\t\tif p.Name == paramName {\n\t\t\treturn p, true\n\t\t}\n\t}\n\treturn ParameterMetadata{}, false\n}\n\nfunc (c CommandMetadata) CheckRequiredParameters(provider types.ServiceProviderConfig) error {\n\tfor _, p := range c.Parameters {\n\t\tif p.Required {\n\t\t\tif _, ok := provider.Options[p.Name]; !ok {\n\t\t\t\treturn fmt.Errorf(\"required parameter %q is missing from provider %q definition\", p.Name, provider.Type)\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/compose/plugins_windows.go",
    "content": "//go:build windows\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nfunc executable(s string) string {\n\treturn s + \".exe\"\n}\n"
  },
  {
    "path": "pkg/compose/port.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Port(ctx context.Context, projectName string, service string, port uint16, options api.PortOptions) (string, int, error) {\n\tprojectName = strings.ToLower(projectName)\n\tctr, err := s.getSpecifiedContainer(ctx, projectName, oneOffInclude, false, service, options.Index)\n\tif err != nil {\n\t\treturn \"\", 0, err\n\t}\n\tfor _, p := range ctr.Ports {\n\t\tif p.PrivatePort == port && p.Type == options.Protocol {\n\t\t\treturn p.IP.String(), int(p.PublicPort), nil\n\t\t}\n\t}\n\treturn \"\", 0, portNotFoundError(options.Protocol, port, ctr)\n}\n\nfunc portNotFoundError(protocol string, port uint16, ctr container.Summary) error {\n\tformatPort := func(protocol string, port uint16) string {\n\t\treturn fmt.Sprintf(\"%d/%s\", port, protocol)\n\t}\n\n\tvar containerPorts []string\n\tfor _, p := range ctr.Ports {\n\t\tcontainerPorts = append(containerPorts, formatPort(p.Type, p.PublicPort))\n\t}\n\n\tname := strings.TrimPrefix(ctr.Names[0], \"/\")\n\treturn fmt.Errorf(\"no port %s for container %s: %s\", formatPort(protocol, port), name, strings.Join(containerPorts, \", \"))\n}\n"
  },
  {
    "path": "pkg/compose/printer.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// logPrinter watch application containers and collect their logs\ntype logPrinter interface {\n\tHandleEvent(event api.ContainerEvent)\n}\n\ntype printer struct {\n\tconsumer api.LogConsumer\n}\n\n// newLogPrinter builds a LogPrinter passing containers logs to LogConsumer\nfunc newLogPrinter(consumer api.LogConsumer) logPrinter {\n\tprinter := printer{\n\t\tconsumer: consumer,\n\t}\n\treturn &printer\n}\n\nfunc (p *printer) HandleEvent(event api.ContainerEvent) {\n\tswitch event.Type {\n\tcase api.ContainerEventExited:\n\t\tif event.Restarting {\n\t\t\tp.consumer.Status(event.Source, fmt.Sprintf(\"exited with code %d (restarting)\", event.ExitCode))\n\t\t} else {\n\t\t\tp.consumer.Status(event.Source, fmt.Sprintf(\"exited with code %d\", event.ExitCode))\n\t\t}\n\tcase api.ContainerEventRecreated:\n\t\tp.consumer.Status(event.Container.Labels[api.ContainerReplaceLabel], \"has been recreated\")\n\tcase api.ContainerEventLog, api.HookEventLog:\n\t\tp.consumer.Log(event.Source, event.Line)\n\tcase api.ContainerEventErr:\n\t\tp.consumer.Err(event.Source, event.Line)\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/progress.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\ntype progressFunc func(context.Context) error\n\nfunc Run(ctx context.Context, pf progressFunc, operation string, bus api.EventProcessor) error {\n\tbus.Start(ctx, operation)\n\terr := pf(ctx)\n\tbus.Done(operation, err != nil)\n\treturn err\n}\n\n// errorEvent creates a new Error Resource with message\nfunc errorEvent(id string, msg string) api.Resource {\n\treturn api.Resource{\n\t\tID:      id,\n\t\tStatus:  api.Error,\n\t\tText:    api.StatusError,\n\t\tDetails: msg,\n\t}\n}\n\n// errorEventf creates a new Error Resource with format message\nfunc errorEventf(id string, msg string, args ...any) api.Resource {\n\treturn errorEvent(id, fmt.Sprintf(msg, args...))\n}\n\n// creatingEvent creates a new Create in progress Resource\nfunc creatingEvent(id string) api.Resource {\n\treturn newEvent(id, api.Working, api.StatusCreating)\n}\n\n// startingEvent creates a new Starting in progress Resource\nfunc startingEvent(id string) api.Resource {\n\treturn newEvent(id, api.Working, api.StatusStarting)\n}\n\n// startedEvent creates a new Started in progress Resource\nfunc startedEvent(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusStarted)\n}\n\n// waiting creates a new waiting event\nfunc waiting(id string) api.Resource 
{\n\treturn newEvent(id, api.Working, api.StatusWaiting)\n}\n\n// healthy creates a new healthy event\nfunc healthy(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusHealthy)\n}\n\n// exited creates a new exited event\nfunc exited(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusExited)\n}\n\n// restartingEvent creates a new Restarting in progress Resource\nfunc restartingEvent(id string) api.Resource {\n\treturn newEvent(id, api.Working, api.StatusRestarting)\n}\n\n// runningEvent creates a new Running (done) Resource\nfunc runningEvent(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusRunning)\n}\n\n// createdEvent creates a new Created (done) Resource\nfunc createdEvent(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusCreated)\n}\n\n// stoppingEvent creates a new Stopping in progress Resource\nfunc stoppingEvent(id string) api.Resource {\n\treturn newEvent(id, api.Working, api.StatusStopping)\n}\n\n// stoppedEvent creates a new Stopped (done) Resource\nfunc stoppedEvent(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusStopped)\n}\n\n// killingEvent creates a new Killing in progress Resource\nfunc killingEvent(id string) api.Resource {\n\treturn newEvent(id, api.Working, api.StatusKilling)\n}\n\n// killedEvent creates a new Killed (done) Resource\nfunc killedEvent(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusKilled)\n}\n\n// removingEvent creates a new Removing in progress Resource\nfunc removingEvent(id string) api.Resource {\n\treturn newEvent(id, api.Working, api.StatusRemoving)\n}\n\n// removedEvent creates a new Removed (done) Resource\nfunc removedEvent(id string) api.Resource {\n\treturn newEvent(id, api.Done, api.StatusRemoved)\n}\n\n// buildingEvent creates a new Building in progress Resource\nfunc buildingEvent(id string) api.Resource {\n\treturn newEvent(\"Image \"+id, api.Working, api.StatusBuilding)\n}\n\n// 
builtEvent creates a new built (done) Resource\nfunc builtEvent(id string) api.Resource {\n\treturn newEvent(\"Image \"+id, api.Done, api.StatusBuilt)\n}\n\n// pullingEvent creates a new pulling (in progress) Resource\nfunc pullingEvent(id string) api.Resource {\n\treturn newEvent(\"Image \"+id, api.Working, api.StatusPulling)\n}\n\n// pulledEvent creates a new pulled (done) Resource\nfunc pulledEvent(id string) api.Resource {\n\treturn newEvent(\"Image \"+id, api.Done, api.StatusPulled)\n}\n\n// skippedEvent creates a new Skipped Resource\nfunc skippedEvent(id string, reason string) api.Resource {\n\treturn api.Resource{\n\t\tID:     id,\n\t\tStatus: api.Warning,\n\t\tText:   \"Skipped: \" + reason,\n\t}\n}\n\n// newEvent new event\nfunc newEvent(id string, status api.EventStatus, text string, reason ...string) api.Resource {\n\tr := api.Resource{\n\t\tID:     id,\n\t\tStatus: status,\n\t\tText:   text,\n\t}\n\tif len(reason) > 0 {\n\t\tr.Details = reason[0]\n\t}\n\treturn r\n}\n\ntype ignore struct{}\n\nfunc (q *ignore) Start(_ context.Context, _ string) {\n}\n\nfunc (q *ignore) Done(_ string, _ bool) {\n}\n\nfunc (q *ignore) On(_ ...api.Resource) {\n}\n"
  },
  {
    "path": "pkg/compose/ps.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n//nolint:gocyclo\nfunc (s *composeService) Ps(ctx context.Context, projectName string, options api.PsOptions) ([]api.ContainerSummary, error) {\n\tprojectName = strings.ToLower(projectName)\n\toneOff := oneOffExclude\n\tif options.All {\n\t\toneOff = oneOffInclude\n\t}\n\tcontainers, err := s.getContainers(ctx, projectName, oneOff, options.All, options.Services...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif len(options.Services) != 0 {\n\t\tcontainers = containers.filter(isService(options.Services...))\n\t}\n\tsummary := make([]api.ContainerSummary, len(containers))\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor i, ctr := range containers {\n\t\teg.Go(func() error {\n\t\t\tpublishers := make([]api.PortPublisher, len(ctr.Ports))\n\t\t\tsort.Slice(ctr.Ports, func(i, j int) bool {\n\t\t\t\treturn ctr.Ports[i].PrivatePort < ctr.Ports[j].PrivatePort\n\t\t\t})\n\t\t\tfor i, p := range ctr.Ports {\n\t\t\t\tvar url string\n\t\t\t\tif p.IP.IsValid() {\n\t\t\t\t\turl = p.IP.String()\n\t\t\t\t}\n\t\t\t\tpublishers[i] = api.PortPublisher{\n\t\t\t\t\tURL:           url, // TODO(thaJeztah); change this to 
a netip.Addr ??\n\t\t\t\t\tTargetPort:    int(p.PrivatePort),\n\t\t\t\t\tPublishedPort: int(p.PublicPort),\n\t\t\t\t\tProtocol:      p.Type,\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tinspect, err := s.apiClient().ContainerInspect(ctx, ctr.ID, client.ContainerInspectOptions{})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tvar (\n\t\t\t\thealth   container.HealthStatus\n\t\t\t\texitCode int\n\t\t\t)\n\t\t\tif inspect.Container.State != nil {\n\t\t\t\tswitch inspect.Container.State.Status {\n\t\t\t\tcase container.StateRunning:\n\t\t\t\t\tif inspect.Container.State.Health != nil {\n\t\t\t\t\t\thealth = inspect.Container.State.Health.Status\n\t\t\t\t\t}\n\t\t\t\tcase container.StateExited, container.StateDead:\n\t\t\t\t\texitCode = inspect.Container.State.ExitCode\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tvar (\n\t\t\t\tlocal  int\n\t\t\t\tmounts []string\n\t\t\t)\n\t\t\tfor _, m := range ctr.Mounts {\n\t\t\t\tname := m.Name\n\t\t\t\tif name == \"\" {\n\t\t\t\t\tname = m.Source\n\t\t\t\t}\n\t\t\t\tif m.Driver == \"local\" {\n\t\t\t\t\tlocal++\n\t\t\t\t}\n\t\t\t\tmounts = append(mounts, name)\n\t\t\t}\n\n\t\t\tvar networks []string\n\t\t\tif ctr.NetworkSettings != nil {\n\t\t\t\tfor k := range ctr.NetworkSettings.Networks {\n\t\t\t\t\tnetworks = append(networks, k)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tsummary[i] = api.ContainerSummary{\n\t\t\t\tID:           ctr.ID,\n\t\t\t\tName:         getCanonicalContainerName(ctr),\n\t\t\t\tNames:        ctr.Names,\n\t\t\t\tImage:        ctr.Image,\n\t\t\t\tProject:      ctr.Labels[api.ProjectLabel],\n\t\t\t\tService:      ctr.Labels[api.ServiceLabel],\n\t\t\t\tCommand:      ctr.Command,\n\t\t\t\tState:        ctr.State,\n\t\t\t\tStatus:       ctr.Status,\n\t\t\t\tCreated:      ctr.Created,\n\t\t\t\tLabels:       ctr.Labels,\n\t\t\t\tSizeRw:       ctr.SizeRw,\n\t\t\t\tSizeRootFs:   ctr.SizeRootFs,\n\t\t\t\tMounts:       mounts,\n\t\t\t\tLocalVolumes: local,\n\t\t\t\tNetworks:     networks,\n\t\t\t\tHealth:       health,\n\t\t\t\tExitCode:     
exitCode,\n\t\t\t\tPublishers:   publishers,\n\t\t\t}\n\t\t\treturn nil\n\t\t})\n\t}\n\treturn summary, eg.Wait()\n}\n"
  },
  {
    "path": "pkg/compose/ps_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"net/netip\"\n\t\"strings\"\n\t\"testing\"\n\n\tcontainerType \"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestPs(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tlistOpts := client.ContainerListOptions{\n\t\tFilters: projectFilter(strings.ToLower(testProject)).Add(\"label\", hasConfigHashLabel(), oneOffFilter(false)),\n\t\tAll:     false,\n\t}\n\tc1, inspect1 := containerDetails(\"service1\", \"123\", containerType.StateRunning, containerType.Healthy, 0)\n\tc2, inspect2 := containerDetails(\"service1\", \"456\", containerType.StateRunning, \"\", 0)\n\tc2.Ports = []containerType.PortSummary{{PublicPort: 80, PrivatePort: 90, IP: netip.MustParseAddr(\"127.0.0.1\")}}\n\tc3, inspect3 := containerDetails(\"service2\", \"789\", containerType.StateExited, \"\", 130)\n\tapi.EXPECT().ContainerList(t.Context(), listOpts).Return(client.ContainerListResult{\n\t\tItems: []containerType.Summary{c1, c2, c3},\n\t}, nil)\n\tapi.EXPECT().ContainerInspect(anyCancellableContext(), \"123\", 
gomock.Any()).Return(client.ContainerInspectResult{Container: inspect1}, nil)\n\tapi.EXPECT().ContainerInspect(anyCancellableContext(), \"456\", gomock.Any()).Return(client.ContainerInspectResult{Container: inspect2}, nil)\n\tapi.EXPECT().ContainerInspect(anyCancellableContext(), \"789\", gomock.Any()).Return(client.ContainerInspectResult{Container: inspect3}, nil)\n\n\tcontainers, err := tested.Ps(t.Context(), strings.ToLower(testProject), compose.PsOptions{})\n\n\texpected := []compose.ContainerSummary{\n\t\t{\n\t\t\tID: \"123\", Name: \"123\", Names: []string{\"/123\"}, Image: \"foo\", Project: strings.ToLower(testProject), Service: \"service1\",\n\t\t\tState:      containerType.StateRunning,\n\t\t\tHealth:     containerType.Healthy,\n\t\t\tPublishers: []compose.PortPublisher{},\n\t\t\tLabels: map[string]string{\n\t\t\t\tcompose.ProjectLabel:     strings.ToLower(testProject),\n\t\t\t\tcompose.ConfigFilesLabel: \"/src/pkg/compose/testdata/compose.yaml\",\n\t\t\t\tcompose.WorkingDirLabel:  \"/src/pkg/compose/testdata\",\n\t\t\t\tcompose.ServiceLabel:     \"service1\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tID: \"456\", Name: \"456\", Names: []string{\"/456\"}, Image: \"foo\", Project: strings.ToLower(testProject), Service: \"service1\",\n\t\t\tState:      containerType.StateRunning,\n\t\t\tPublishers: []compose.PortPublisher{{URL: \"127.0.0.1\", TargetPort: 90, PublishedPort: 80}},\n\t\t\tLabels: map[string]string{\n\t\t\t\tcompose.ProjectLabel:     strings.ToLower(testProject),\n\t\t\t\tcompose.ConfigFilesLabel: \"/src/pkg/compose/testdata/compose.yaml\",\n\t\t\t\tcompose.WorkingDirLabel:  \"/src/pkg/compose/testdata\",\n\t\t\t\tcompose.ServiceLabel:     \"service1\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tID: \"789\", Name: \"789\", Names: []string{\"/789\"}, Image: \"foo\", Project: strings.ToLower(testProject), Service: \"service2\",\n\t\t\tState:      containerType.StateExited,\n\t\t\tExitCode:   130,\n\t\t\tPublishers: []compose.PortPublisher{},\n\t\t\tLabels: 
map[string]string{\n\t\t\t\tcompose.ProjectLabel:     strings.ToLower(testProject),\n\t\t\t\tcompose.ConfigFilesLabel: \"/src/pkg/compose/testdata/compose.yaml\",\n\t\t\t\tcompose.WorkingDirLabel:  \"/src/pkg/compose/testdata\",\n\t\t\t\tcompose.ServiceLabel:     \"service2\",\n\t\t\t},\n\t\t},\n\t}\n\tassert.NilError(t, err)\n\tassert.DeepEqual(t, containers, expected)\n}\n\nfunc containerDetails(service string, id string, status containerType.ContainerState, health containerType.HealthStatus, exitCode int) (containerType.Summary, containerType.InspectResponse) {\n\tctr := containerType.Summary{\n\t\tID:     id,\n\t\tNames:  []string{\"/\" + id},\n\t\tImage:  \"foo\",\n\t\tLabels: containerLabels(service, false),\n\t\tState:  status,\n\t}\n\tinspect := containerType.InspectResponse{\n\t\tState: &containerType.State{\n\t\t\tStatus:   status,\n\t\t\tHealth:   &containerType.Health{Status: health},\n\t\t\tExitCode: exitCode,\n\t\t},\n\t}\n\treturn ctr, inspect\n}\n"
  },
  {
    "path": "pkg/compose/publish.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/sha256\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/DefangLabs/secret-detector/pkg/scanner\"\n\t\"github.com/DefangLabs/secret-detector/pkg/secrets\"\n\t\"github.com/compose-spec/compose-go/v2/loader\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/opencontainers/go-digest\"\n\t\"github.com/opencontainers/image-spec/specs-go\"\n\tv1 \"github.com/opencontainers/image-spec/specs-go/v1\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/internal/oci\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/compose/transform\"\n)\n\nfunc (s *composeService) Publish(ctx context.Context, project *types.Project, repository string, options api.PublishOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.publish(ctx, project, repository, options)\n\t}, \"publish\", s.events)\n}\n\n//nolint:gocyclo\nfunc (s *composeService) publish(ctx context.Context, project *types.Project, repository string, options api.PublishOptions) error {\n\tproject, err := project.WithProfiles([]string{\"*\"})\n\tif err != nil {\n\t\treturn err\n\t}\n\taccept, err := s.preChecks(project, options)\n\tif err != nil 
{\n\t\treturn err\n\t}\n\tif !accept {\n\t\treturn nil\n\t}\n\terr = s.Push(ctx, project, api.PushOptions{IgnoreFailures: true, ImageMandatory: true})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tlayers, err := s.createLayers(ctx, project, options)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\ts.events.On(api.Resource{\n\t\tID:     repository,\n\t\tText:   \"publishing\",\n\t\tStatus: api.Working,\n\t})\n\tif logrus.IsLevelEnabled(logrus.DebugLevel) {\n\t\tlogrus.Debug(\"publishing layers\")\n\t\tfor _, layer := range layers {\n\t\t\tindent, _ := json.MarshalIndent(layer, \"\", \"  \")\n\t\t\tfmt.Println(string(indent))\n\t\t}\n\t}\n\tif !s.dryRun {\n\t\tnamed, err := reference.ParseDockerRef(repository)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tvar insecureRegistries []string\n\t\tif options.InsecureRegistry {\n\t\t\tinsecureRegistries = append(insecureRegistries, reference.Domain(named))\n\t\t}\n\n\t\tresolver := oci.NewResolver(s.configFile(), insecureRegistries...)\n\n\t\tdescriptor, err := oci.PushManifest(ctx, resolver, named, layers, options.OCIVersion)\n\t\tif err != nil {\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:     repository,\n\t\t\t\tText:   \"publishing\",\n\t\t\t\tStatus: api.Error,\n\t\t\t})\n\t\t\treturn err\n\t\t}\n\n\t\tif options.Application {\n\t\t\tmanifests := []v1.Descriptor{}\n\t\t\tfor _, service := range project.Services {\n\t\t\t\tref, err := reference.ParseDockerRef(service.Image)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\n\t\t\t\tmanifest, err := oci.Copy(ctx, resolver, ref, named)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tmanifests = append(manifests, manifest)\n\t\t\t}\n\n\t\t\tdescriptor.Data = nil\n\t\t\tindex, err := json.Marshal(v1.Index{\n\t\t\t\tVersioned: specs.Versioned{SchemaVersion: 2},\n\t\t\t\tMediaType: v1.MediaTypeImageIndex,\n\t\t\t\tManifests: manifests,\n\t\t\t\tSubject:   &descriptor,\n\t\t\t\tAnnotations: 
map[string]string{\n\t\t\t\t\t\"com.docker.compose.version\": api.ComposeVersion,\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\timagesDescriptor := v1.Descriptor{\n\t\t\t\tMediaType:    v1.MediaTypeImageIndex,\n\t\t\t\tArtifactType: oci.ComposeProjectArtifactType,\n\t\t\t\tDigest:       digest.FromString(string(index)),\n\t\t\t\tSize:         int64(len(index)),\n\t\t\t\tAnnotations: map[string]string{\n\t\t\t\t\t\"com.docker.compose.version\": api.ComposeVersion,\n\t\t\t\t},\n\t\t\t\tData: index,\n\t\t\t}\n\t\t\terr = oci.Push(ctx, resolver, reference.TrimNamed(named), imagesDescriptor)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\ts.events.On(api.Resource{\n\t\tID:     repository,\n\t\tText:   \"published\",\n\t\tStatus: api.Done,\n\t})\n\treturn nil\n}\n\nfunc (s *composeService) createLayers(ctx context.Context, project *types.Project, options api.PublishOptions) ([]v1.Descriptor, error) {\n\tvar layers []v1.Descriptor\n\textFiles := map[string]string{}\n\tenvFiles := map[string]string{}\n\tfor _, file := range project.ComposeFiles {\n\t\tdata, err := processFile(ctx, file, project, extFiles, envFiles)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tlayerDescriptor := oci.DescriptorForComposeFile(file, data)\n\t\tlayers = append(layers, layerDescriptor)\n\t}\n\n\textLayers, err := processExtends(ctx, project, extFiles)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tlayers = append(layers, extLayers...)\n\n\tif options.WithEnvironment {\n\t\tlayers = append(layers, envFileLayers(envFiles)...)\n\t}\n\n\tif options.ResolveImageDigests {\n\t\tyaml, err := s.generateImageDigestsOverride(ctx, project)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tlayerDescriptor := oci.DescriptorForComposeFile(\"image-digests.yaml\", yaml)\n\t\tlayers = append(layers, layerDescriptor)\n\t}\n\treturn layers, nil\n}\n\nfunc processExtends(ctx context.Context, project *types.Project, extFiles 
map[string]string) ([]v1.Descriptor, error) {\n\tvar layers []v1.Descriptor\n\tmoreExtFiles := map[string]string{}\n\tfor xf, hash := range extFiles {\n\t\tdata, err := processFile(ctx, xf, project, moreExtFiles, nil)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tlayerDescriptor := oci.DescriptorForComposeFile(hash, data)\n\t\tlayerDescriptor.Annotations[\"com.docker.compose.extends\"] = \"true\"\n\t\tlayers = append(layers, layerDescriptor)\n\t}\n\tfor f, hash := range moreExtFiles {\n\t\tif _, ok := extFiles[f]; ok {\n\t\t\tdelete(moreExtFiles, f)\n\t\t}\n\t\textFiles[f] = hash\n\t}\n\tif len(moreExtFiles) > 0 {\n\t\textLayers, err := processExtends(ctx, project, moreExtFiles)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tlayers = append(layers, extLayers...)\n\t}\n\treturn layers, nil\n}\n\nfunc processFile(ctx context.Context, file string, project *types.Project, extFiles map[string]string, envFiles map[string]string) ([]byte, error) {\n\tf, err := os.ReadFile(file)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tbase, err := loader.LoadWithContext(ctx, types.ConfigDetails{\n\t\tWorkingDir:  project.WorkingDir,\n\t\tEnvironment: project.Environment,\n\t\tConfigFiles: []types.ConfigFile{\n\t\t\t{\n\t\t\t\tFilename: file,\n\t\t\t\tContent:  f,\n\t\t\t},\n\t\t},\n\t}, func(options *loader.Options) {\n\t\toptions.SkipValidation = true\n\t\toptions.SkipExtends = true\n\t\toptions.SkipConsistencyCheck = true\n\t\toptions.ResolvePaths = true\n\t\toptions.SkipInclude = true\n\t\toptions.Profiles = project.Profiles\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor name, service := range base.Services {\n\t\tfor i, envFile := range service.EnvFiles {\n\t\t\thash := fmt.Sprintf(\"%x.env\", sha256.Sum256([]byte(envFile.Path)))\n\t\t\tenvFiles[envFile.Path] = hash\n\t\t\tf, err = transform.ReplaceEnvFile(f, name, i, hash)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\n\t\tif service.Extends == nil 
{\n\t\t\tcontinue\n\t\t}\n\t\txf := service.Extends.File\n\t\tif xf == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tif _, err = os.Stat(xf); os.IsNotExist(err) {\n\t\t\t// no local file, yet the project loaded successfully: this is actually a remote resource\n\t\t\tcontinue\n\t\t}\n\n\t\thash := fmt.Sprintf(\"%x.yaml\", sha256.Sum256([]byte(xf)))\n\t\textFiles[xf] = hash\n\n\t\tf, err = transform.ReplaceExtendsFile(f, name, hash)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn f, nil\n}\n\nfunc (s *composeService) generateImageDigestsOverride(ctx context.Context, project *types.Project) ([]byte, error) {\n\tproject, err := project.WithImagesResolved(ImageDigestResolver(ctx, s.configFile(), s.apiClient()))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\toverride := types.Project{\n\t\tServices: types.Services{},\n\t}\n\tfor name, service := range project.Services {\n\t\toverride.Services[name] = types.ServiceConfig{\n\t\t\tImage: service.Image,\n\t\t}\n\t}\n\treturn override.MarshalYAML()\n}\n\nfunc (s *composeService) preChecks(project *types.Project, options api.PublishOptions) (bool, error) {\n\tif ok, err := s.checkOnlyBuildSection(project); !ok || err != nil {\n\t\treturn false, err\n\t}\n\tbindMounts := s.checkForBindMount(project)\n\tif len(bindMounts) > 0 {\n\t\tb := strings.Builder{}\n\t\tb.WriteString(\"you are about to publish bind mount declarations within your OCI artifact.\\n\" +\n\t\t\t\"only the bind mount declarations will be added to the OCI artifact (not their content)\\n\" +\n\t\t\t\"please double-check that you are not mounting potentially sensitive user directories or data\\n\")\n\t\tfor key, val := range bindMounts {\n\t\t\tb.WriteString(key)\n\t\t\tfor _, v := range val {\n\t\t\t\tb.WriteString(v.String())\n\t\t\t\tb.WriteRune('\\n')\n\t\t\t}\n\t\t}\n\t\tb.WriteString(\"Are you ok to publish these bind mount declarations?\")\n\t\tconfirm, err := s.prompt(b.String(), false)\n\t\tif err != nil || !confirm 
{\n\t\t\treturn false, err\n\t\t}\n\t}\n\tdetectedSecrets, err := s.checkForSensitiveData(project)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tif len(detectedSecrets) > 0 {\n\t\tb := strings.Builder{}\n\t\tb.WriteString(\"you are about to publish sensitive data within your OCI artifact.\\n\" +\n\t\t\t\"please double-check that you are not leaking sensitive data\\n\")\n\t\tfor _, val := range detectedSecrets {\n\t\t\tb.WriteString(val.Type)\n\t\t\tb.WriteRune('\\n')\n\t\t\tb.WriteString(fmt.Sprintf(\"%q: %s\\n\", val.Key, val.Value))\n\t\t}\n\t\tb.WriteString(\"Are you ok to publish this sensitive data?\")\n\t\tconfirm, err := s.prompt(b.String(), false)\n\t\tif err != nil || !confirm {\n\t\t\treturn false, err\n\t\t}\n\t}\n\terr = s.checkEnvironmentVariables(project, options)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\treturn true, nil\n}\n\nfunc (s *composeService) checkEnvironmentVariables(project *types.Project, options api.PublishOptions) error {\n\terrorList := map[string][]string{}\n\n\tfor _, service := range project.Services {\n\t\tif len(service.EnvFiles) > 0 {\n\t\t\terrorList[service.Name] = append(errorList[service.Name], fmt.Sprintf(\"service %q has env_file declared.\", service.Name))\n\t\t}\n\t}\n\n\tif !options.WithEnvironment && len(errorList) > 0 {\n\t\terrorMsgSuffix := \"To avoid leaking sensitive data, you must either explicitly allow the sending of environment variables by using the --with-env flag,\\n\" +\n\t\t\t\"or remove sensitive data from your Compose configuration\"\n\t\tvar errorMsg strings.Builder\n\t\tfor _, msgs := range errorList {\n\t\t\tfor _, msg := range msgs {\n\t\t\t\terrorMsg.WriteString(fmt.Sprintf(\"%s\\n\", msg))\n\t\t\t}\n\t\t}\n\t\treturn fmt.Errorf(\"%s%s\", errorMsg.String(), errorMsgSuffix)\n\t}\n\treturn nil\n}\n\nfunc envFileLayers(files map[string]string) []v1.Descriptor {\n\tvar layers []v1.Descriptor\n\tfor file, hash := range files {\n\t\tf, err := os.ReadFile(file)\n\t\tif err != nil 
{\n\t\t\t// if we can't read the file, skip to the next one\n\t\t\tcontinue\n\t\t}\n\t\tlayerDescriptor := oci.DescriptorForEnvFile(hash, f)\n\t\tlayers = append(layers, layerDescriptor)\n\t}\n\treturn layers\n}\n\nfunc (s *composeService) checkOnlyBuildSection(project *types.Project) (bool, error) {\n\terrorList := []string{}\n\tfor _, service := range project.Services {\n\t\tif service.Image == \"\" && service.Build != nil {\n\t\t\terrorList = append(errorList, service.Name)\n\t\t}\n\t}\n\tif len(errorList) > 0 {\n\t\tvar errMsg strings.Builder\n\t\terrMsg.WriteString(\"your Compose stack cannot be published as it only contains a build section for service(s):\\n\")\n\t\tfor _, serviceInError := range errorList {\n\t\t\terrMsg.WriteString(fmt.Sprintf(\"- %q\\n\", serviceInError))\n\t\t}\n\t\treturn false, errors.New(errMsg.String())\n\t}\n\treturn true, nil\n}\n\nfunc (s *composeService) checkForBindMount(project *types.Project) map[string][]types.ServiceVolumeConfig {\n\tallFindings := map[string][]types.ServiceVolumeConfig{}\n\tfor serviceName, config := range project.Services {\n\t\tbindMounts := []types.ServiceVolumeConfig{}\n\t\tfor _, volume := range config.Volumes {\n\t\t\tif volume.Type == types.VolumeTypeBind {\n\t\t\t\tbindMounts = append(bindMounts, volume)\n\t\t\t}\n\t\t}\n\t\tif len(bindMounts) > 0 {\n\t\t\tallFindings[serviceName] = bindMounts\n\t\t}\n\t}\n\treturn allFindings\n}\n\nfunc (s *composeService) checkForSensitiveData(project *types.Project) ([]secrets.DetectedSecret, error) {\n\tvar allFindings []secrets.DetectedSecret\n\tscan := scanner.NewDefaultScanner()\n\t// Check all compose files\n\tfor _, file := range project.ComposeFiles {\n\t\tin, err := composeFileAsByteReader(file, project)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tfindings, err := scan.ScanReader(in)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to scan compose file %s: %w\", file, err)\n\t\t}\n\t\tallFindings = append(allFindings, 
findings...)\n\t}\n\tfor _, service := range project.Services {\n\t\t// Check env files\n\t\tfor _, envFile := range service.EnvFiles {\n\t\t\tfindings, err := scan.ScanFile(envFile.Path)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"failed to scan env file %s: %w\", envFile.Path, err)\n\t\t\t}\n\t\t\tallFindings = append(allFindings, findings...)\n\t\t}\n\t}\n\n\t// Check configs defined by files\n\tfor _, config := range project.Configs {\n\t\tif config.File != \"\" {\n\t\t\tfindings, err := scan.ScanFile(config.File)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"failed to scan config file %s: %w\", config.File, err)\n\t\t\t}\n\t\t\tallFindings = append(allFindings, findings...)\n\t\t}\n\t}\n\n\t// Check secrets defined by files\n\tfor _, secret := range project.Secrets {\n\t\tif secret.File != \"\" {\n\t\t\tfindings, err := scan.ScanFile(secret.File)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"failed to scan secret file %s: %w\", secret.File, err)\n\t\t\t}\n\t\t\tallFindings = append(allFindings, findings...)\n\t\t}\n\t}\n\n\treturn allFindings, nil\n}\n\nfunc composeFileAsByteReader(filePath string, project *types.Project) (io.Reader, error) {\n\tcomposeFile, err := os.ReadFile(filePath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to open compose file %s: %w\", filePath, err)\n\t}\n\tbase, err := loader.LoadWithContext(context.TODO(), types.ConfigDetails{\n\t\tWorkingDir:  project.WorkingDir,\n\t\tEnvironment: project.Environment,\n\t\tConfigFiles: []types.ConfigFile{\n\t\t\t{\n\t\t\t\tFilename: filePath,\n\t\t\t\tContent:  composeFile,\n\t\t\t},\n\t\t},\n\t}, func(options *loader.Options) {\n\t\toptions.SkipValidation = true\n\t\toptions.SkipExtends = true\n\t\toptions.SkipConsistencyCheck = true\n\t\toptions.ResolvePaths = true\n\t\toptions.SkipInterpolation = true\n\t\toptions.SkipResolveEnvironment = true\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tin, err := base.MarshalYAML()\n\tif err != nil 
{\n\t\treturn nil, err\n\t}\n\treturn bytes.NewBuffer(in), nil\n}\n"
  },
  {
    "path": "pkg/compose/publish_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"slices\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/loader\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/google/go-cmp/cmp\"\n\tv1 \"github.com/opencontainers/image-spec/specs-go/v1\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/internal\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc Test_createLayers(t *testing.T) {\n\tproject, err := loader.LoadWithContext(t.Context(), types.ConfigDetails{\n\t\tWorkingDir:  \"testdata/publish/\",\n\t\tEnvironment: types.Mapping{},\n\t\tConfigFiles: []types.ConfigFile{\n\t\t\t{\n\t\t\t\tFilename: \"testdata/publish/compose.yaml\",\n\t\t\t},\n\t\t},\n\t})\n\tassert.NilError(t, err)\n\tproject.ComposeFiles = []string{\"testdata/publish/compose.yaml\"}\n\n\tservice := &composeService{}\n\tlayers, err := service.createLayers(t.Context(), project, api.PublishOptions{\n\t\tWithEnvironment: true,\n\t})\n\tassert.NilError(t, err)\n\n\tpublished := string(layers[0].Data)\n\tassert.Equal(t, published, `name: test\nservices:\n  test:\n    extends:\n      file: f8f9ede3d201ec37d5a5e3a77bbadab79af26035e53135e19571f50d541d390c.yaml\n      service: foo\n\n  string:\n    image: test\n    env_file: 5efca9cdbac9f5394c6c2e2094b1b42661f988f57fcab165a0bf72b205451af3.env\n\n  list:\n    image: test\n    env_file:\n      - 
5efca9cdbac9f5394c6c2e2094b1b42661f988f57fcab165a0bf72b205451af3.env\n\n  mapping:\n    image: test\n    env_file:\n      - path: 5efca9cdbac9f5394c6c2e2094b1b42661f988f57fcab165a0bf72b205451af3.env\n`)\n\n\texpectedLayers := []v1.Descriptor{\n\t\t{\n\t\t\tMediaType: \"application/vnd.docker.compose.file+yaml\",\n\t\t\tAnnotations: map[string]string{\n\t\t\t\t\"com.docker.compose.file\":    \"compose.yaml\",\n\t\t\t\t\"com.docker.compose.version\": internal.Version,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tMediaType: \"application/vnd.docker.compose.file+yaml\",\n\t\t\tAnnotations: map[string]string{\n\t\t\t\t\"com.docker.compose.extends\": \"true\",\n\t\t\t\t\"com.docker.compose.file\":    \"f8f9ede3d201ec37d5a5e3a77bbadab79af26035e53135e19571f50d541d390c\",\n\t\t\t\t\"com.docker.compose.version\": internal.Version,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tMediaType: \"application/vnd.docker.compose.envfile\",\n\t\t\tAnnotations: map[string]string{\n\t\t\t\t\"com.docker.compose.envfile\": \"5efca9cdbac9f5394c6c2e2094b1b42661f988f57fcab165a0bf72b205451af3\",\n\t\t\t\t\"com.docker.compose.version\": internal.Version,\n\t\t\t},\n\t\t},\n\t}\n\tassert.DeepEqual(t, expectedLayers, layers, cmp.FilterPath(func(path cmp.Path) bool {\n\t\treturn !slices.Contains([]string{\".Data\", \".Digest\", \".Size\"}, path.String())\n\t}, cmp.Ignore()))\n}\n"
  },
  {
    "path": "pkg/compose/pull.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/platforms\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/docker/cli/cli/config/configfile\"\n\tclitypes \"github.com/docker/cli/cli/config/types\"\n\t\"github.com/docker/go-units\"\n\t\"github.com/moby/moby/api/types/jsonstream\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/opencontainers/go-digest\"\n\tocispec \"github.com/opencontainers/image-spec/specs-go/v1\"\n\t\"github.com/sirupsen/logrus\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/internal/registry\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Pull(ctx context.Context, project *types.Project, options api.PullOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.pull(ctx, project, options)\n\t}, \"pull\", s.events)\n}\n\nfunc (s *composeService) pull(ctx context.Context, project *types.Project, opts api.PullOptions) error { //nolint:gocyclo\n\timages, err := s.getLocalImagesDigests(ctx, project)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\teg.SetLimit(s.maxConcurrency)\n\n\tvar (\n\t\tmustBuild        
 []string\n\t\tpullErrors        = make([]error, len(project.Services))\n\t\timagesBeingPulled = map[string]string{}\n\t)\n\n\ti := 0\n\tfor name, service := range project.Services {\n\t\tif service.Image == \"\" {\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:      name,\n\t\t\t\tStatus:  api.Done,\n\t\t\t\tText:    \"Skipped\",\n\t\t\t\tDetails: \"No image to be pulled\",\n\t\t\t})\n\t\t\tcontinue\n\t\t}\n\n\t\tswitch service.PullPolicy {\n\t\tcase types.PullPolicyNever, types.PullPolicyBuild:\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:     \"Image \" + service.Image,\n\t\t\t\tStatus: api.Done,\n\t\t\t\tText:   \"Skipped\",\n\t\t\t})\n\t\t\tcontinue\n\t\tcase types.PullPolicyMissing, types.PullPolicyIfNotPresent:\n\t\t\tif imageAlreadyPresent(service.Image, images) {\n\t\t\t\ts.events.On(api.Resource{\n\t\t\t\t\tID:      \"Image \" + service.Image,\n\t\t\t\t\tStatus:  api.Done,\n\t\t\t\t\tText:    \"Skipped\",\n\t\t\t\t\tDetails: \"Image is already present locally\",\n\t\t\t\t})\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif service.Build != nil && opts.IgnoreBuildable {\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:      \"Image \" + service.Image,\n\t\t\t\tStatus:  api.Done,\n\t\t\t\tText:    \"Skipped\",\n\t\t\t\tDetails: \"Image can be built\",\n\t\t\t})\n\t\t\tcontinue\n\t\t}\n\n\t\tif _, ok := imagesBeingPulled[service.Image]; ok {\n\t\t\tcontinue\n\t\t}\n\n\t\timagesBeingPulled[service.Image] = service.Name\n\n\t\tidx := i\n\t\teg.Go(func() error {\n\t\t\t_, err := s.pullServiceImage(ctx, service, opts.Quiet, project.Environment[\"DOCKER_DEFAULT_PLATFORM\"])\n\t\t\tif err != nil {\n\t\t\t\tpullErrors[idx] = err\n\t\t\t\tif service.Build != nil {\n\t\t\t\t\tmustBuild = append(mustBuild, service.Name)\n\t\t\t\t}\n\t\t\t\tif !opts.IgnoreFailures && service.Build == nil {\n\t\t\t\t\tif s.dryRun {\n\t\t\t\t\t\ts.events.On(errorEventf(\"Image \"+service.Image,\n\t\t\t\t\t\t\t\"error pulling image: %s\", service.Image))\n\t\t\t\t\t}\n\t\t\t\t\t// fail fast if image 
can't be pulled nor built\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil\n\t\t})\n\t\ti++\n\t}\n\n\terr = eg.Wait()\n\n\tif len(mustBuild) > 0 {\n\t\tlogrus.Warnf(\"WARNING: Some service image(s) must be built from source by running:\\n    docker compose build %s\", strings.Join(mustBuild, \" \"))\n\t}\n\n\tif err != nil {\n\t\treturn err\n\t}\n\tif opts.IgnoreFailures {\n\t\treturn nil\n\t}\n\treturn errors.Join(pullErrors...)\n}\n\nfunc imageAlreadyPresent(serviceImage string, localImages map[string]api.ImageSummary) bool {\n\tnormalizedImage, err := reference.ParseDockerRef(serviceImage)\n\tif err != nil {\n\t\treturn false\n\t}\n\tswitch refType := normalizedImage.(type) {\n\tcase reference.NamedTagged:\n\t\t_, ok := localImages[serviceImage]\n\t\treturn ok && refType.Tag() != \"latest\"\n\tdefault:\n\t\t_, ok := localImages[serviceImage]\n\t\treturn ok\n\t}\n}\n\nfunc getUnwrappedErrorMessage(err error) string {\n\tderr := errors.Unwrap(err)\n\tif derr != nil {\n\t\treturn getUnwrappedErrorMessage(derr)\n\t}\n\treturn err.Error()\n}\n\nfunc (s *composeService) pullServiceImage(ctx context.Context, service types.ServiceConfig, quietPull bool, defaultPlatform string) (string, error) {\n\tresource := \"Image \" + service.Image\n\ts.events.On(pullingEvent(service.Image))\n\tref, err := reference.ParseNormalizedNamed(service.Image)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tencodedAuth, err := encodedAuth(ref, s.configFile())\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tplatform := service.Platform\n\tif platform == \"\" {\n\t\tplatform = defaultPlatform\n\t}\n\n\tvar ociPlatforms []ocispec.Platform\n\tif platform != \"\" {\n\t\tp, err := platforms.Parse(platform)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tociPlatforms = append(ociPlatforms, p)\n\t}\n\n\tstream, err := s.apiClient().ImagePull(ctx, service.Image, client.ImagePullOptions{\n\t\tRegistryAuth: encodedAuth,\n\t\tPlatforms:    ociPlatforms,\n\t})\n\n\tif ctx.Err() 
!= nil {\n\t\ts.events.On(api.Resource{\n\t\t\tID:     resource,\n\t\t\tStatus: api.Warning,\n\t\t\tText:   \"Interrupted\",\n\t\t})\n\t\treturn \"\", nil\n\t}\n\n\t// if the pull failed but the service has a build section,\n\t// report a warning instead of an error so the image can still be built\n\tif err != nil && service.Build != nil {\n\t\ts.events.On(api.Resource{\n\t\t\tID:     resource,\n\t\t\tStatus: api.Warning,\n\t\t\tText:   getUnwrappedErrorMessage(err),\n\t\t})\n\t\treturn \"\", err\n\t}\n\n\tif err != nil {\n\t\ts.events.On(errorEvent(resource, getUnwrappedErrorMessage(err)))\n\t\treturn \"\", err\n\t}\n\n\tdec := json.NewDecoder(stream)\n\tfor {\n\t\tvar jm jsonstream.Message\n\t\tif err := dec.Decode(&jm); err != nil {\n\t\t\tif errors.Is(err, io.EOF) {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\treturn \"\", err\n\t\t}\n\t\tif jm.Error != nil {\n\t\t\treturn \"\", errors.New(jm.Error.Message)\n\t\t}\n\t\tif !quietPull {\n\t\t\ttoPullProgressEvent(resource, jm, s.events)\n\t\t}\n\t}\n\ts.events.On(pulledEvent(service.Image))\n\n\tinspected, err := s.apiClient().ImageInspect(ctx, service.Image)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn inspected.ID, nil\n}\n\n// ImageDigestResolver returns a function that resolves the digest of an image from its Docker reference.\nfunc ImageDigestResolver(ctx context.Context, file *configfile.ConfigFile, apiClient client.APIClient) func(named reference.Named) (digest.Digest, error) {\n\treturn func(named reference.Named) (digest.Digest, error) {\n\t\tauth, err := encodedAuth(named, file)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tinspect, err := apiClient.DistributionInspect(ctx, named.String(), client.DistributionInspectOptions{\n\t\t\tEncodedRegistryAuth: auth,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn \"\",\n\t\t\t\tfmt.Errorf(\"failed to resolve digest for %s: %w\", named.String(), err)\n\t\t}\n\t\treturn inspect.Descriptor.Digest, nil\n\t}\n}\n\ntype authProvider interface {\n\tGetAuthConfig(registryHostname string) 
(clitypes.AuthConfig, error)\n}\n\nfunc encodedAuth(ref reference.Named, configFile authProvider) (string, error) {\n\tauthConfig, err := configFile.GetAuthConfig(registry.GetAuthConfigKey(reference.Domain(ref)))\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tbuf, err := json.Marshal(authConfig)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn base64.URLEncoding.EncodeToString(buf), nil\n}\n\nfunc (s *composeService) pullRequiredImages(ctx context.Context, project *types.Project, images map[string]api.ImageSummary, quietPull bool) error {\n\tneedPull := map[string]types.ServiceConfig{}\n\tfor name, service := range project.Services {\n\t\tpull, err := mustPull(service, images)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif pull {\n\t\t\tneedPull[name] = service\n\t\t}\n\t\tfor i, vol := range service.Volumes {\n\t\t\tif vol.Type == types.VolumeTypeImage {\n\t\t\t\tif _, ok := images[vol.Source]; !ok {\n\t\t\t\t\t// Hack: create a fake ServiceConfig so we pull missing volume image\n\t\t\t\t\tn := fmt.Sprintf(\"%s:volume %d\", name, i)\n\t\t\t\t\tneedPull[n] = types.ServiceConfig{\n\t\t\t\t\t\tName:  n,\n\t\t\t\t\t\tImage: vol.Source,\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t}\n\tif len(needPull) == 0 {\n\t\treturn nil\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\teg.SetLimit(s.maxConcurrency)\n\tpulledImages := map[string]api.ImageSummary{}\n\tvar mutex sync.Mutex\n\tfor name, service := range needPull {\n\t\teg.Go(func() error {\n\t\t\tid, err := s.pullServiceImage(ctx, service, quietPull, project.Environment[\"DOCKER_DEFAULT_PLATFORM\"])\n\t\t\tmutex.Lock()\n\t\t\tdefer mutex.Unlock()\n\t\t\tpulledImages[name] = api.ImageSummary{\n\t\t\t\tID:          id,\n\t\t\t\tRepository:  service.Image,\n\t\t\t\tLastTagTime: time.Now(),\n\t\t\t}\n\t\t\tif err != nil && isServiceImageToBuild(service, project.Services) {\n\t\t\t\t// image can be built, so we can ignore pull failure\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\treturn err\n\t\t})\n\t}\n\terr := 
eg.Wait()\n\tfor i, service := range needPull {\n\t\tif pulledImages[i].ID != \"\" {\n\t\t\timages[service.Image] = pulledImages[i]\n\t\t}\n\t}\n\treturn err\n}\n\nfunc mustPull(service types.ServiceConfig, images map[string]api.ImageSummary) (bool, error) {\n\tif service.Provider != nil {\n\t\treturn false, nil\n\t}\n\tif service.Image == \"\" {\n\t\treturn false, nil\n\t}\n\tpolicy, duration, err := service.GetPullPolicy()\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tswitch policy {\n\tcase types.PullPolicyAlways:\n\t\t// force pull\n\t\treturn true, nil\n\tcase types.PullPolicyNever, types.PullPolicyBuild:\n\t\treturn false, nil\n\tcase types.PullPolicyRefresh:\n\t\timg, ok := images[service.Image]\n\t\tif !ok {\n\t\t\treturn true, nil\n\t\t}\n\t\treturn time.Now().After(img.LastTagTime.Add(duration)), nil\n\tdefault: // Pull if missing\n\t\t_, ok := images[service.Image]\n\t\treturn !ok, nil\n\t}\n}\n\nfunc isServiceImageToBuild(service types.ServiceConfig, services types.Services) bool {\n\tif service.Build != nil {\n\t\treturn true\n\t}\n\n\tif service.Image == \"\" {\n\t\t// N.B. 
this should be impossible as service must have either `build` or `image` (or both)\n\t\treturn false\n\t}\n\n\t// look through the other services to see if another has a build definition for the same\n\t// image name\n\tfor _, svc := range services {\n\t\tif svc.Image == service.Image && svc.Build != nil {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nconst (\n\tPreparingPhase         = \"Preparing\"\n\tWaitingPhase           = \"waiting\"\n\tPullingFsPhase         = \"Pulling fs layer\"\n\tDownloadingPhase       = \"Downloading\"\n\tDownloadCompletePhase  = \"Download complete\"\n\tExtractingPhase        = \"Extracting\"\n\tVerifyingChecksumPhase = \"Verifying Checksum\"\n\tAlreadyExistsPhase     = \"Already exists\"\n\tPullCompletePhase      = \"Pull complete\"\n)\n\nfunc toPullProgressEvent(parent string, jm jsonstream.Message, events api.EventProcessor) {\n\tif jm.ID == \"\" || jm.Progress == nil {\n\t\treturn\n\t}\n\n\tvar (\n\t\tdetails string\n\t\ttotal   int64\n\t\tpercent int\n\t\tcurrent int64\n\t\tstatus  = api.Working\n\t)\n\n\tswitch jm.Status {\n\tcase PreparingPhase, WaitingPhase, PullingFsPhase:\n\t\tpercent = 0\n\tcase DownloadingPhase, ExtractingPhase, VerifyingChecksumPhase:\n\t\tif jm.Progress != nil {\n\t\t\tcurrent = jm.Progress.Current\n\t\t\ttotal = jm.Progress.Total\n\t\t\tif jm.Progress.Total > 0 {\n\t\t\t\tpercent = min(int(jm.Progress.Current*100/jm.Progress.Total), 100)\n\t\t\t}\n\t\t}\n\tcase DownloadCompletePhase, AlreadyExistsPhase, PullCompletePhase:\n\t\tstatus = api.Done\n\t\tpercent = 100\n\t}\n\n\tif strings.Contains(jm.Status, \"Image is up to date\") ||\n\t\tstrings.Contains(jm.Status, \"Downloaded newer image\") {\n\t\tstatus = api.Done\n\t\tpercent = 100\n\t}\n\n\tif jm.Error != nil {\n\t\tstatus = api.Error\n\t\tdetails = jm.Error.Message\n\t} else {\n\t\tdetails = units.HumanSize(float64(jm.Progress.Current))\n\t}\n\n\tevents.On(api.Resource{\n\t\tID:       jm.ID,\n\t\tParentID: parent,\n\t\tCurrent:  
current,\n\t\tTotal:    total,\n\t\tPercent:  percent,\n\t\tStatus:   status,\n\t\tText:     jm.Status,\n\t\tDetails:  details,\n\t})\n}\n"
  },
  {
    "path": "pkg/compose/push.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/docker/go-units\"\n\t\"github.com/moby/moby/api/types/jsonstream\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/internal/registry\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Push(ctx context.Context, project *types.Project, options api.PushOptions) error {\n\tif options.Quiet {\n\t\treturn s.push(ctx, project, options)\n\t}\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.push(ctx, project, options)\n\t}, \"push\", s.events)\n}\n\nfunc (s *composeService) push(ctx context.Context, project *types.Project, options api.PushOptions) error {\n\teg, ctx := errgroup.WithContext(ctx)\n\teg.SetLimit(s.maxConcurrency)\n\n\tfor _, service := range project.Services {\n\t\tif service.Build == nil || service.Image == \"\" {\n\t\t\tif options.ImageMandatory && service.Image == \"\" && service.Provider == nil {\n\t\t\t\treturn fmt.Errorf(\"%q attribute is mandatory to push an image for service %q\", \"service.image\", service.Name)\n\t\t\t}\n\t\t\ts.events.On(api.Resource{\n\t\t\t\tID:     
service.Name,\n\t\t\t\tStatus: api.Done,\n\t\t\t\tText:   \"Skipped\",\n\t\t\t})\n\t\t\tcontinue\n\t\t}\n\t\ttags := []string{service.Image}\n\t\tif service.Build != nil {\n\t\t\ttags = append(tags, service.Build.Tags...)\n\t\t}\n\n\t\tfor _, tag := range tags {\n\t\t\teg.Go(func() error {\n\t\t\t\ts.events.On(newEvent(tag, api.Working, \"Pushing\"))\n\t\t\t\terr := s.pushServiceImage(ctx, tag, options.Quiet)\n\t\t\t\tif err != nil {\n\t\t\t\t\tif !options.IgnoreFailures {\n\t\t\t\t\t\ts.events.On(newEvent(tag, api.Error, err.Error()))\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\ts.events.On(newEvent(tag, api.Warning, err.Error()))\n\t\t\t\t} else {\n\t\t\t\t\ts.events.On(newEvent(tag, api.Done, \"Pushed\"))\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\t})\n\t\t}\n\t}\n\treturn eg.Wait()\n}\n\nfunc (s *composeService) pushServiceImage(ctx context.Context, tag string, quietPush bool) error {\n\tref, err := reference.ParseNormalizedNamed(tag)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tauthConfig, err := s.configFile().GetAuthConfig(registry.GetAuthConfigKey(reference.Domain(ref)))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tbuf, err := json.Marshal(authConfig)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tstream, err := s.apiClient().ImagePush(ctx, tag, client.ImagePushOptions{\n\t\tRegistryAuth: base64.URLEncoding.EncodeToString(buf),\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tdec := json.NewDecoder(stream)\n\tfor {\n\t\tvar jm jsonstream.Message\n\t\tif err := dec.Decode(&jm); err != nil {\n\t\t\tif errors.Is(err, io.EOF) {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t\tif jm.Error != nil {\n\t\t\treturn errors.New(jm.Error.Message)\n\t\t}\n\n\t\tif !quietPush {\n\t\t\ttoPushProgressEvent(tag, jm, s.events)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc toPushProgressEvent(prefix string, jm jsonstream.Message, events api.EventProcessor) {\n\tif jm.ID == \"\" {\n\t\t// skipped\n\t\treturn\n\t}\n\tvar (\n\t\ttext    string\n\t\tstatus  = api.Working\n\t\ttotal   
int64\n\t\tcurrent int64\n\t\tpercent int\n\t)\n\tif isDone(jm) {\n\t\tstatus = api.Done\n\t\tpercent = 100\n\t}\n\tif jm.Error != nil {\n\t\tstatus = api.Error\n\t\ttext = jm.Error.Message\n\t}\n\tif jm.Progress != nil {\n\t\ttext = progressText(jm.Progress)\n\t\tif jm.Progress.Total != 0 {\n\t\t\tcurrent = jm.Progress.Current\n\t\t\ttotal = jm.Progress.Total\n\t\t\tif jm.Progress.Total > 0 {\n\t\t\t\tpercent = min(int(jm.Progress.Current*100/jm.Progress.Total), 100)\n\t\t\t}\n\t\t}\n\t}\n\n\tevents.On(api.Resource{\n\t\tParentID: prefix,\n\t\tID:       jm.ID,\n\t\tText:     text,\n\t\tStatus:   status,\n\t\tCurrent:  current,\n\t\tTotal:    total,\n\t\tPercent:  percent,\n\t})\n}\n\nfunc isDone(msg jsonstream.Message) bool {\n\t// TODO there should be a better way to detect push is done than such a status message check\n\tswitch strings.ToLower(msg.Status) {\n\tcase \"pushed\", \"layer already exists\":\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// progressText is a minimal variant of [jsonmessage.JSONProgress.String()]\n//\n// [jsonmessage.JSONProgress.String()]: https://github.com/moby/moby/blob/v28.5.2/pkg/jsonmessage/jsonmessage.go#L54-L117\nfunc progressText(p *jsonstream.Progress) string {\n\tswitch {\n\tcase p.Current <= 0 && p.Total <= 0:\n\t\treturn \"\"\n\tcase p.Units == \"\": // no units, use bytes\n\t\tcurrent := units.HumanSize(float64(p.Current))\n\t\tif p.Total <= 0 || p.Total > p.Current {\n\t\t\t// remove total display if the reported current is wonky.\n\t\t\treturn fmt.Sprintf(\"%8v\", current)\n\t\t}\n\t\ttotal := units.HumanSize(float64(p.Total))\n\t\treturn fmt.Sprintf(\"%8v/%v\", current, total)\n\tdefault:\n\t\tif p.Total <= 0 || p.Total > p.Current {\n\t\t\t// remove total display if the reported current is wonky.\n\t\t\treturn fmt.Sprintf(\"%d %s\", p.Current, p.Units)\n\t\t}\n\t\treturn fmt.Sprintf(\"%d/%d %s\", p.Current, p.Total, p.Units)\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/remove.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Remove(ctx context.Context, projectName string, options api.RemoveOptions) error {\n\tprojectName = strings.ToLower(projectName)\n\n\tif options.Stop {\n\t\terr := s.Stop(ctx, projectName, api.StopOptions{\n\t\t\tServices: options.Services,\n\t\t\tProject:  options.Project,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffExclude, true, options.Services...)\n\tif err != nil {\n\t\tif api.IsNotFoundError(err) {\n\t\t\t_, _ = fmt.Fprintln(s.stderr(), \"No stopped containers\")\n\t\t\treturn nil\n\t\t}\n\t\treturn err\n\t}\n\n\tif options.Project != nil {\n\t\tcontainers = containers.filter(isService(options.Project.ServiceNames()...))\n\t}\n\n\tvar stoppedContainers Containers\n\tfor _, ctr := range containers {\n\t\t// We have to inspect containers, as State reported by getContainers suffers a race condition\n\t\tinspected, err := s.apiClient().ContainerInspect(ctx, ctr.ID, client.ContainerInspectOptions{})\n\t\tif api.IsNotFoundError(err) {\n\t\t\t// Already removed. 
Maybe configured with auto-remove\n\t\t\tcontinue\n\t\t}\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif !inspected.Container.State.Running || (options.Stop && s.dryRun) {\n\t\t\tstoppedContainers = append(stoppedContainers, ctr)\n\t\t}\n\t}\n\n\tvar names []string\n\tstoppedContainers.forEach(func(c container.Summary) {\n\t\tnames = append(names, getCanonicalContainerName(c))\n\t})\n\n\tif len(names) == 0 {\n\t\treturn api.ErrNoResources\n\t}\n\n\tmsg := fmt.Sprintf(\"Going to remove %s\", strings.Join(names, \", \"))\n\tif options.Force {\n\t\t_, _ = fmt.Fprintln(s.stdout(), msg)\n\t} else {\n\t\tconfirm, err := s.prompt(msg, false)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif !confirm {\n\t\t\treturn nil\n\t\t}\n\t}\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.remove(ctx, stoppedContainers, options)\n\t}, \"remove\", s.events)\n}\n\nfunc (s *composeService) remove(ctx context.Context, containers Containers, options api.RemoveOptions) error {\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor _, ctr := range containers {\n\t\teg.Go(func() error {\n\t\t\teventName := getContainerProgressName(ctr)\n\t\t\ts.events.On(removingEvent(eventName))\n\t\t\t_, err := s.apiClient().ContainerRemove(ctx, ctr.ID, client.ContainerRemoveOptions{\n\t\t\t\tRemoveVolumes: options.Volumes,\n\t\t\t\tForce:         options.Force,\n\t\t\t})\n\t\t\tif err == nil {\n\t\t\t\ts.events.On(removedEvent(eventName))\n\t\t\t}\n\t\t\treturn err\n\t\t})\n\t}\n\treturn eg.Wait()\n}\n"
  },
  {
    "path": "pkg/compose/restart.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc (s *composeService) Restart(ctx context.Context, projectName string, options api.RestartOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.restart(ctx, strings.ToLower(projectName), options)\n\t}, \"restart\", s.events)\n}\n\nfunc (s *composeService) restart(ctx context.Context, projectName string, options api.RestartOptions) error { //nolint:gocyclo\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffExclude, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject := options.Project\n\tif project == nil {\n\t\tproject, err = s.getProjectWithResources(ctx, containers, projectName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif options.NoDeps {\n\t\tproject, err = project.WithSelectedServices(options.Services, types.IgnoreDependencies)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// ignore depends_on relations which are not impacted by restarting service or not required\n\tproject, err = project.WithServicesTransform(func(_ string, s types.ServiceConfig) (types.ServiceConfig, error) {\n\t\tfor 
name, r := range s.DependsOn {\n\t\t\tif !r.Restart {\n\t\t\t\tdelete(s.DependsOn, name)\n\t\t\t}\n\t\t}\n\t\treturn s, nil\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif len(options.Services) != 0 {\n\t\tproject, err = project.WithSelectedServices(options.Services, types.IncludeDependents)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn InDependencyOrder(ctx, project, func(c context.Context, service string) error {\n\t\tconfig := project.Services[service]\n\t\terr = s.waitDependencies(ctx, project, service, config.DependsOn, containers, 0)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\teg, ctx := errgroup.WithContext(ctx)\n\t\tfor _, ctr := range containers.filter(isService(service)) {\n\t\t\teg.Go(func() error {\n\t\t\t\tdef := project.Services[service]\n\t\t\t\tfor _, hook := range def.PreStop {\n\t\t\t\t\terr = s.runHook(ctx, ctr, def, hook, nil)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\teventName := getContainerProgressName(ctr)\n\t\t\t\ts.events.On(restartingEvent(eventName))\n\t\t\t\t_, err = s.apiClient().ContainerRestart(ctx, ctr.ID, client.ContainerRestartOptions{\n\t\t\t\t\tTimeout: utils.DurationSecondToInt(options.Timeout),\n\t\t\t\t})\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\ts.events.On(startedEvent(eventName))\n\t\t\t\tfor _, hook := range def.PostStart {\n\t\t\t\t\terr = s.runHook(ctx, ctr, def, hook, nil)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\t})\n\t\t}\n\t\treturn eg.Wait()\n\t})\n}\n"
  },
  {
    "path": "pkg/compose/run.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/signal\"\n\t\"slices\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli\"\n\tcmd \"github.com/docker/cli/cli/command/container\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/events\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/moby/moby/client/pkg/stringid\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\ntype prepareRunResult struct {\n\tcontainerID string\n\tservice     types.ServiceConfig\n\tcreated     container.Summary\n}\n\nfunc (s *composeService) RunOneOffContainer(ctx context.Context, project *types.Project, opts api.RunOptions) (int, error) {\n\tresult, err := s.prepareRun(ctx, project, opts)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\t// remove cancellable context signal handler so we can forward signals to container without compose from exiting\n\tsignal.Reset()\n\n\tsigc := make(chan os.Signal, 128)\n\tsignal.Notify(sigc)\n\tgo cmd.ForwardAllSignals(ctx, s.apiClient(), result.containerID, sigc)\n\tdefer signal.Stop(sigc)\n\n\t// If the service has post_start hooks, set up a goroutine that waits for\n\t// the container to start and then executes them. 
This is needed because\n\t// cmd.RunStart both starts and attaches to the container in one call,\n\t// so we can't run hooks sequentially between start and attach.\n\tvar hookErrCh chan error\n\tif len(result.service.PostStart) > 0 {\n\t\thookErrCh = make(chan error, 1)\n\t\tgo func() {\n\t\t\thookErrCh <- s.runPostStartHooksOnEvent(ctx, result.containerID, result.service, result.created)\n\t\t}()\n\t}\n\n\terr = cmd.RunStart(ctx, s.dockerCli, &cmd.StartOptions{\n\t\tOpenStdin:  !opts.Detach && opts.Interactive,\n\t\tAttach:     !opts.Detach,\n\t\tContainers: []string{result.containerID},\n\t\tDetachKeys: s.configFile().DetachKeys,\n\t})\n\n\t// Wait for hooks to complete if they were started\n\tif hookErrCh != nil {\n\t\tif hookErr := <-hookErrCh; hookErr != nil && err == nil {\n\t\t\terr = hookErr\n\t\t}\n\t}\n\n\tvar stErr cli.StatusError\n\tif errors.As(err, &stErr) {\n\t\treturn stErr.StatusCode, nil\n\t}\n\treturn 0, err\n}\n\n// runPostStartHooksOnEvent listens for the container's start event and executes\n// post_start lifecycle hooks once the container is running.\nfunc (s *composeService) runPostStartHooksOnEvent(ctx context.Context, containerID string, service types.ServiceConfig, ctr container.Summary) error {\n\tevtCtx, cancel := context.WithCancel(ctx)\n\tdefer cancel()\n\n\tres := s.apiClient().Events(evtCtx, client.EventsListOptions{\n\t\tFilters: make(client.Filters).\n\t\t\tAdd(\"type\", \"container\").\n\t\t\tAdd(\"container\", containerID).\n\t\t\tAdd(\"event\", string(events.ActionStart)),\n\t})\n\n\t// Wait for the container start event\n\tselect {\n\tcase <-evtCtx.Done():\n\t\treturn evtCtx.Err()\n\tcase err := <-res.Err:\n\t\treturn err\n\tcase <-res.Messages:\n\t\t// Container started, run hooks\n\t}\n\n\tfor _, hook := range service.PostStart {\n\t\tif err := s.runHook(ctx, ctr, service, hook, nil); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (s *composeService) prepareRun(ctx context.Context, project 
*types.Project, opts api.RunOptions) (prepareRunResult, error) {\n\t// Temporary implementation of use_api_socket until we get actual support inside docker engine\n\tproject, err := s.useAPISocket(project)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\terr = Run(ctx, func(ctx context.Context) error {\n\t\treturn s.startDependencies(ctx, project, opts)\n\t}, \"run\", s.events)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tservice, err := project.GetService(opts.Service)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tapplyRunOptions(project, &service, opts)\n\n\tif err := s.stdin().CheckTty(opts.Interactive, service.Tty); err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tslug := stringid.GenerateRandomID()\n\tif service.ContainerName == \"\" {\n\t\tservice.ContainerName = fmt.Sprintf(\"%[1]s%[4]s%[2]s%[4]srun%[4]s%[3]s\", project.Name, service.Name, stringid.TruncateID(slug), api.Separator)\n\t}\n\tone := 1\n\tservice.Scale = &one\n\tservice.Restart = \"\"\n\tif service.Deploy != nil {\n\t\tservice.Deploy.RestartPolicy = nil\n\t}\n\tservice.CustomLabels = service.CustomLabels.\n\t\tAdd(api.SlugLabel, slug).\n\t\tAdd(api.OneoffLabel, \"True\")\n\n\t// Only ensure image exists for the target service, dependencies were already handled by startDependencies\n\tbuildOpts := prepareBuildOptions(opts)\n\tif err := s.ensureImagesExists(ctx, project, buildOpts, opts.QuietPull); err != nil { // all dependencies already checked, but might miss service img\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tobservedState, err := s.getContainers(ctx, project.Name, oneOffInclude, true)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tif !opts.NoDeps {\n\t\tif err := s.waitDependencies(ctx, project, service.Name, service.DependsOn, observedState, 0); err != nil {\n\t\t\treturn prepareRunResult{}, err\n\t\t}\n\t}\n\tcreateOpts := createOptions{\n\t\tAutoRemove:        opts.AutoRemove,\n\t\tAttachStdin:       
opts.Interactive,\n\t\tUseNetworkAliases: opts.UseNetworkAliases,\n\t\tLabels:            mergeLabels(service.Labels, service.CustomLabels),\n\t}\n\n\terr = newConvergence(project.ServiceNames(), observedState, nil, nil, s).resolveServiceReferences(&service)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\terr = s.ensureModels(ctx, project, opts.QuietPull)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tcreated, err := s.createContainer(ctx, project, service, service.ContainerName, -1, createOpts)\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\tinspect, err := s.apiClient().ContainerInspect(ctx, created.ID, client.ContainerInspectOptions{})\n\tif err != nil {\n\t\treturn prepareRunResult{}, err\n\t}\n\n\terr = s.injectSecrets(ctx, project, service, inspect.Container.ID)\n\tif err != nil {\n\t\treturn prepareRunResult{containerID: created.ID}, err\n\t}\n\n\terr = s.injectConfigs(ctx, project, service, inspect.Container.ID)\n\treturn prepareRunResult{\n\t\tcontainerID: created.ID,\n\t\tservice:     service,\n\t\tcreated:     created,\n\t}, err\n}\n\nfunc prepareBuildOptions(opts api.RunOptions) *api.BuildOptions {\n\tif opts.Build == nil {\n\t\treturn nil\n\t}\n\t// Create a copy of build options and restrict to only the target service\n\tbuildOptsCopy := *opts.Build\n\tbuildOptsCopy.Services = []string{opts.Service}\n\treturn &buildOptsCopy\n}\n\nfunc applyRunOptions(project *types.Project, service *types.ServiceConfig, opts api.RunOptions) {\n\tservice.Tty = opts.Tty\n\tservice.StdinOpen = opts.Interactive\n\tservice.ContainerName = opts.Name\n\n\tif len(opts.Command) > 0 {\n\t\tservice.Command = opts.Command\n\t}\n\tif opts.User != \"\" {\n\t\tservice.User = opts.User\n\t}\n\n\tif len(opts.CapAdd) > 0 {\n\t\tservice.CapAdd = append(service.CapAdd, opts.CapAdd...)\n\t\tservice.CapDrop = slices.DeleteFunc(service.CapDrop, func(e string) bool { return slices.Contains(opts.CapAdd, e) })\n\t}\n\tif len(opts.CapDrop) > 0 
{\n\t\tservice.CapDrop = append(service.CapDrop, opts.CapDrop...)\n\t\tservice.CapAdd = slices.DeleteFunc(service.CapAdd, func(e string) bool { return slices.Contains(opts.CapDrop, e) })\n\t}\n\tif opts.WorkingDir != \"\" {\n\t\tservice.WorkingDir = opts.WorkingDir\n\t}\n\tif opts.Entrypoint != nil {\n\t\tservice.Entrypoint = opts.Entrypoint\n\t\tif len(opts.Command) == 0 {\n\t\t\tservice.Command = []string{}\n\t\t}\n\t}\n\tif len(opts.Environment) > 0 {\n\t\tcmdEnv := types.NewMappingWithEquals(opts.Environment)\n\t\tserviceOverrideEnv := cmdEnv.Resolve(func(s string) (string, bool) {\n\t\t\tv, ok := envResolver(project.Environment)(s)\n\t\t\treturn v, ok\n\t\t}).RemoveEmpty()\n\t\tif service.Environment == nil {\n\t\t\tservice.Environment = types.MappingWithEquals{}\n\t\t}\n\t\tservice.Environment.OverrideBy(serviceOverrideEnv)\n\t}\n\tfor k, v := range opts.Labels {\n\t\tservice.Labels = service.Labels.Add(k, v)\n\t}\n}\n\nfunc (s *composeService) startDependencies(ctx context.Context, project *types.Project, options api.RunOptions) error {\n\tproject = project.WithServicesDisabled(options.Service)\n\n\terr := s.Create(ctx, project, api.CreateOptions{\n\t\tBuild:         options.Build,\n\t\tIgnoreOrphans: options.IgnoreOrphans,\n\t\tRemoveOrphans: options.RemoveOrphans,\n\t\tQuietPull:     options.QuietPull,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif len(project.Services) > 0 {\n\t\treturn s.Start(ctx, project.Name, api.StartOptions{\n\t\t\tProject: project,\n\t\t})\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/compose/scale.go",
    "content": "/*\nCopyright 2020 Docker Compose CLI authors\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\thttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\npackage compose\n\nimport (\n\t\"context\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Scale(ctx context.Context, project *types.Project, options api.ScaleOptions) error {\n\treturn Run(ctx, tracing.SpanWrapFunc(\"project/scale\", tracing.ProjectOptions(ctx, project), func(ctx context.Context) error {\n\t\terr := s.create(ctx, project, api.CreateOptions{Services: options.Services})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn s.start(ctx, project.Name, api.StartOptions{Project: project, Services: options.Services}, nil)\n\t}), \"scale\", s.events)\n}\n"
  },
  {
    "path": "pkg/compose/secrets.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/client\"\n)\n\ntype mountType string\n\nconst (\n\tsecretMount mountType = \"secret\"\n\tconfigMount mountType = \"config\"\n)\n\nfunc (s *composeService) injectSecrets(ctx context.Context, project *types.Project, service types.ServiceConfig, id string) error {\n\treturn s.injectFileReferences(ctx, project, service, id, secretMount)\n}\n\nfunc (s *composeService) injectConfigs(ctx context.Context, project *types.Project, service types.ServiceConfig, id string) error {\n\treturn s.injectFileReferences(ctx, project, service, id, configMount)\n}\n\nfunc (s *composeService) injectFileReferences(ctx context.Context, project *types.Project, service types.ServiceConfig, id string, mountType mountType) error {\n\tmounts, sources := s.getFilesAndMap(project, service, mountType)\n\n\tfor _, mount := range mounts {\n\t\tcontent, err := s.resolveFileContent(project, sources[mount.Source], mountType)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif content == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif service.ReadOnly {\n\t\t\treturn fmt.Errorf(\"cannot create %s %q in read-only service %s: `file` is the sole supported option\", mountType, 
sources[mount.Source].Name, service.Name)\n\t\t}\n\n\t\ts.setDefaultTarget(&mount, mountType)\n\n\t\tif err := s.copyFileToContainer(ctx, id, content, mount); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (s *composeService) getFilesAndMap(project *types.Project, service types.ServiceConfig, mountType mountType) ([]types.FileReferenceConfig, map[string]types.FileObjectConfig) {\n\tvar files []types.FileReferenceConfig\n\tvar fileMap map[string]types.FileObjectConfig\n\n\tswitch mountType {\n\tcase secretMount:\n\t\tfiles = make([]types.FileReferenceConfig, len(service.Secrets))\n\t\tfor i, config := range service.Secrets {\n\t\t\tfiles[i] = types.FileReferenceConfig(config)\n\t\t}\n\t\tfileMap = make(map[string]types.FileObjectConfig)\n\t\tfor k, v := range project.Secrets {\n\t\t\tfileMap[k] = types.FileObjectConfig(v)\n\t\t}\n\tcase configMount:\n\t\tfiles = make([]types.FileReferenceConfig, len(service.Configs))\n\t\tfor i, config := range service.Configs {\n\t\t\tfiles[i] = types.FileReferenceConfig(config)\n\t\t}\n\t\tfileMap = make(map[string]types.FileObjectConfig)\n\t\tfor k, v := range project.Configs {\n\t\t\tfileMap[k] = types.FileObjectConfig(v)\n\t\t}\n\t}\n\treturn files, fileMap\n}\n\nfunc (s *composeService) resolveFileContent(project *types.Project, source types.FileObjectConfig, mountType mountType) (string, error) {\n\tif source.Content != \"\" {\n\t\t// inlined, or already resolved by include\n\t\treturn source.Content, nil\n\t}\n\tif source.Environment != \"\" {\n\t\tenv, ok := project.Environment[source.Environment]\n\t\tif !ok {\n\t\t\treturn \"\", fmt.Errorf(\"environment variable %q required by %s %q is not set\", source.Environment, mountType, source.Name)\n\t\t}\n\t\treturn env, nil\n\t}\n\treturn \"\", nil\n}\n\nfunc (s *composeService) setDefaultTarget(file *types.FileReferenceConfig, mountType mountType) {\n\tif file.Target == \"\" {\n\t\tif mountType == secretMount {\n\t\t\tfile.Target = \"/run/secrets/\" + 
file.Source\n\t\t} else {\n\t\t\tfile.Target = \"/\" + file.Source\n\t\t}\n\t} else if mountType == secretMount && !isAbsTarget(file.Target) {\n\t\tfile.Target = \"/run/secrets/\" + file.Target\n\t}\n}\n\nfunc (s *composeService) copyFileToContainer(ctx context.Context, id, content string, file types.FileReferenceConfig) error {\n\tb, err := createTar(content, file)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t_, err = s.apiClient().CopyToContainer(ctx, id, client.CopyToContainerOptions{\n\t\tDestinationPath: \"/\",\n\t\tContent:         &b,\n\t\tCopyUIDGID:      file.UID != \"\" || file.GID != \"\",\n\t})\n\treturn err\n}\n\nfunc createTar(env string, config types.FileReferenceConfig) (bytes.Buffer, error) {\n\tvalue := []byte(env)\n\tb := bytes.Buffer{}\n\ttarWriter := tar.NewWriter(&b)\n\tmode := types.FileMode(0o444)\n\tif config.Mode != nil {\n\t\tmode = *config.Mode\n\t}\n\n\tvar uid, gid int\n\tif config.UID != \"\" {\n\t\tv, err := strconv.Atoi(config.UID)\n\t\tif err != nil {\n\t\t\treturn b, err\n\t\t}\n\t\tuid = v\n\t}\n\tif config.GID != \"\" {\n\t\tv, err := strconv.Atoi(config.GID)\n\t\tif err != nil {\n\t\t\treturn b, err\n\t\t}\n\t\tgid = v\n\t}\n\n\theader := &tar.Header{\n\t\tName:    config.Target,\n\t\tSize:    int64(len(value)),\n\t\tMode:    int64(mode),\n\t\tModTime: time.Now(),\n\t\tUid:     uid,\n\t\tGid:     gid,\n\t}\n\terr := tarWriter.WriteHeader(header)\n\tif err != nil {\n\t\treturn bytes.Buffer{}, err\n\t}\n\t_, err = tarWriter.Write(value)\n\tif err != nil {\n\t\treturn bytes.Buffer{}, err\n\t}\n\terr = tarWriter.Close()\n\treturn b, err\n}\n"
  },
  {
    "path": "pkg/compose/shellout.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli-plugins/metadata\"\n\t\"github.com/docker/cli/cli/command\"\n\t\"github.com/docker/cli/cli/flags\"\n\t\"github.com/moby/moby/client\"\n\t\"go.opentelemetry.io/otel\"\n\t\"go.opentelemetry.io/otel/propagation\"\n\n\t\"github.com/docker/compose/v5/internal\"\n)\n\n// prepareShellOut prepares a shell-out command to be run by Compose\nfunc (s *composeService) prepareShellOut(gctx context.Context, env types.Mapping, cmd *exec.Cmd) error {\n\tenv = env.Clone()\n\t// remove DOCKER_CLI_PLUGIN... 
variable so a docker-cli plugin will detect it is run standalone\n\tdelete(env, metadata.ReexecEnvvar)\n\n\t// propagate opentelemetry context to child process, see https://github.com/open-telemetry/oteps/blob/main/text/0258-env-context-baggage-carriers.md\n\tcarrier := propagation.MapCarrier{}\n\totel.GetTextMapPropagator().Inject(gctx, &carrier)\n\tenv.Merge(types.Mapping(carrier))\n\n\tcmd.Env = env.Values()\n\treturn nil\n}\n\n// propagateDockerEndpoint produces DOCKER_* env vars for a child CLI plugin to target the same docker endpoint\n// `cleanup` func MUST be called after child process completion to enforce removal of cert files\nfunc (s *composeService) propagateDockerEndpoint() ([]string, func(), error) {\n\tcleanup := func() {}\n\tenv := types.Mapping{}\n\n\tenv[command.EnvOverrideContext] = s.dockerCli.CurrentContext()\n\tenv[\"USER_AGENT\"] = \"compose/\" + internal.Version\n\n\tendpoint := s.dockerCli.DockerEndpoint()\n\tenv[client.EnvOverrideHost] = endpoint.Host\n\tif endpoint.TLSData != nil {\n\t\tcerts, err := os.MkdirTemp(\"\", \"compose\")\n\t\tif err != nil {\n\t\t\treturn nil, cleanup, err\n\t\t}\n\t\tcleanup = func() {\n\t\t\t_ = os.RemoveAll(certs)\n\t\t}\n\t\tenv[client.EnvOverrideCertPath] = certs\n\t\tenv[\"DOCKER_TLS\"] = \"1\"\n\t\tif !endpoint.SkipTLSVerify {\n\t\t\tenv[client.EnvTLSVerify] = \"1\"\n\t\t}\n\n\t\terr = os.WriteFile(filepath.Join(certs, flags.DefaultKeyFile), endpoint.TLSData.Key, 0o600)\n\t\tif err != nil {\n\t\t\treturn nil, cleanup, err\n\t\t}\n\t\terr = os.WriteFile(filepath.Join(certs, flags.DefaultCertFile), endpoint.TLSData.Cert, 0o600)\n\t\tif err != nil {\n\t\t\treturn nil, cleanup, err\n\t\t}\n\t\terr = os.WriteFile(filepath.Join(certs, flags.DefaultCaFile), endpoint.TLSData.CA, 0o600)\n\t\tif err != nil {\n\t\t\treturn nil, cleanup, err\n\t\t}\n\t}\n\treturn env.Values(), cleanup, nil\n}\n"
  },
  {
    "path": "pkg/compose/start.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Start(ctx context.Context, projectName string, options api.StartOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.start(ctx, strings.ToLower(projectName), options, nil)\n\t}, \"start\", s.events)\n}\n\nfunc (s *composeService) start(ctx context.Context, projectName string, options api.StartOptions, listener api.ContainerEventListener) error {\n\tproject := options.Project\n\tif project == nil {\n\t\tvar containers Containers\n\t\tcontainers, err := s.getContainers(ctx, projectName, oneOffExclude, true)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tproject, err = s.projectFromName(containers, projectName, options.AttachTo...)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tres, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: projectFilter(project.Name).Add(\"label\", oneOffFilter(false)),\n\t\tAll:     true,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tcontainers := Containers(res.Items)\n\n\terr = InDependencyOrder(ctx, project, func(c context.Context, name string) error {\n\t\tservice, err := 
project.GetService(name)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn s.startService(ctx, project, service, containers, listener, options.WaitTimeout)\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif options.Wait {\n\t\tdepends := types.DependsOnConfig{}\n\t\tfor _, s := range project.Services {\n\t\t\tdepends[s.Name] = types.ServiceDependency{\n\t\t\t\tCondition: getDependencyCondition(s, project),\n\t\t\t\tRequired:  true,\n\t\t\t}\n\t\t}\n\t\tif options.WaitTimeout > 0 {\n\t\t\twithTimeout, cancel := context.WithTimeout(ctx, options.WaitTimeout)\n\t\t\tctx = withTimeout\n\t\t\tdefer cancel()\n\t\t}\n\n\t\terr = s.waitDependencies(ctx, project, project.Name, depends, containers, 0)\n\t\tif err != nil {\n\t\t\tif errors.Is(ctx.Err(), context.DeadlineExceeded) {\n\t\t\t\treturn fmt.Errorf(\"application not healthy after %s\", options.WaitTimeout)\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// getDependencyCondition checks if service is depended on by other services\n// with service_completed_successfully condition, and applies that condition\n// instead, or --wait will never finish waiting for one-shot containers\nfunc getDependencyCondition(service types.ServiceConfig, project *types.Project) string {\n\tfor _, services := range project.Services {\n\t\tfor dependencyService, dependencyConfig := range services.DependsOn {\n\t\t\tif dependencyService == service.Name && dependencyConfig.Condition == types.ServiceConditionCompletedSuccessfully {\n\t\t\t\treturn types.ServiceConditionCompletedSuccessfully\n\t\t\t}\n\t\t}\n\t}\n\treturn ServiceConditionRunningOrHealthy\n}\n"
  },
  {
    "path": "pkg/compose/stop.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Stop(ctx context.Context, projectName string, options api.StopOptions) error {\n\treturn Run(ctx, func(ctx context.Context) error {\n\t\treturn s.stop(ctx, strings.ToLower(projectName), options, nil)\n\t}, \"stop\", s.events)\n}\n\nfunc (s *composeService) stop(ctx context.Context, projectName string, options api.StopOptions, event api.ContainerEventListener) error {\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffExclude, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tproject := options.Project\n\tif project == nil {\n\t\tproject, err = s.getProjectWithResources(ctx, containers, projectName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif len(options.Services) == 0 {\n\t\toptions.Services = project.ServiceNames()\n\t}\n\n\treturn InReverseDependencyOrder(ctx, project, func(c context.Context, service string) error {\n\t\tif !slices.Contains(options.Services, service) {\n\t\t\treturn nil\n\t\t}\n\t\tserv := project.Services[service]\n\t\treturn s.stopContainers(ctx, &serv, containers.filter(isService(service)).filter(isNotOneOff), options.Timeout, event)\n\t})\n}\n"
  },
  {
    "path": "pkg/compose/stop_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc TestStopTimeout(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tapi, cli := prepareMocks(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\tassert.NilError(t, err)\n\n\tapi.EXPECT().ContainerList(gomock.Any(), projectFilterListOpt(false)).Return(\n\t\tclient.ContainerListResult{\n\t\t\tItems: []container.Summary{\n\t\t\t\ttestContainer(\"service1\", \"123\", false),\n\t\t\t\ttestContainer(\"service1\", \"456\", false),\n\t\t\t\ttestContainer(\"service2\", \"789\", false),\n\t\t\t},\n\t\t}, nil)\n\tapi.EXPECT().VolumeList(\n\t\tgomock.Any(),\n\t\tclient.VolumeListOptions{\n\t\t\tFilters: projectFilter(strings.ToLower(testProject)),\n\t\t}).\n\t\tReturn(client.VolumeListResult{}, nil)\n\tapi.EXPECT().NetworkList(gomock.Any(), client.NetworkListOptions{Filters: projectFilter(strings.ToLower(testProject))}).\n\t\tReturn(client.NetworkListResult{}, nil)\n\n\ttimeout := 2 * time.Second\n\tstopConfig := client.ContainerStopOptions{Timeout: 
utils.DurationSecondToInt(&timeout)}\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"123\", stopConfig).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"456\", stopConfig).Return(client.ContainerStopResult{}, nil)\n\tapi.EXPECT().ContainerStop(gomock.Any(), \"789\", stopConfig).Return(client.ContainerStopResult{}, nil)\n\n\terr = tested.Stop(t.Context(), strings.ToLower(testProject), compose.StopOptions{\n\t\tTimeout: &timeout,\n\t})\n\tassert.NilError(t, err)\n}\n"
  },
  {
    "path": "pkg/compose/suffix_unix.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nfunc executable(s string) string {\n\treturn s\n}\n"
  },
  {
    "path": "pkg/compose/testdata/compose.yaml",
    "content": "services:\n  service1:\n    image: nginx\n  service2:\n    image: mysql\n"
  },
  {
    "path": "pkg/compose/testdata/publish/common.yaml",
    "content": "services:\n  foo:\n    image: bar\n"
  },
  {
    "path": "pkg/compose/testdata/publish/compose.yaml",
    "content": "name: test\nservices:\n  test:\n    extends:\n      file: common.yaml\n      service: foo\n\n  string:\n    image: test\n    env_file: test.env\n\n  list:\n    image: test\n    env_file:\n      - test.env\n\n  mapping:\n    image: test\n    env_file:\n      - path: test.env\n"
  },
  {
    "path": "pkg/compose/testdata/publish/test.env",
    "content": "HELLO=WORLD"
  },
  {
    "path": "pkg/compose/top.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strings\"\n\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Top(ctx context.Context, projectName string, services []string) ([]api.ContainerProcSummary, error) {\n\tprojectName = strings.ToLower(projectName)\n\tvar containers Containers\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffInclude, false)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(services) > 0 {\n\t\tcontainers = containers.filter(isService(services...))\n\t}\n\tsummary := make([]api.ContainerProcSummary, len(containers))\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor i, ctr := range containers {\n\t\teg.Go(func() error {\n\t\t\ttopContent, err := s.apiClient().ContainerTop(ctx, ctr.ID, client.ContainerTopOptions{\n\t\t\t\tArguments: []string{},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tname := getCanonicalContainerName(ctr)\n\t\t\ts := api.ContainerProcSummary{\n\t\t\t\tID:        ctr.ID,\n\t\t\t\tName:      name,\n\t\t\t\tProcesses: topContent.Processes,\n\t\t\t\tTitles:    topContent.Titles,\n\t\t\t\tService:   name,\n\t\t\t}\n\t\t\tif service, exists := ctr.Labels[api.ServiceLabel]; exists {\n\t\t\t\ts.Service = service\n\t\t\t}\n\t\t\tif replica, exists := 
ctr.Labels[api.ContainerNumberLabel]; exists {\n\t\t\t\ts.Replica = replica\n\t\t\t}\n\t\t\tsummary[i] = s\n\t\t\treturn nil\n\t\t})\n\t}\n\treturn summary, eg.Wait()\n}\n"
  },
  {
    "path": "pkg/compose/transform/replace.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage transform\n\nimport (\n\t\"fmt\"\n\n\t\"go.yaml.in/yaml/v4\"\n)\n\n// ReplaceExtendsFile changes value for service.extends.file in input yaml stream, preserving formatting\nfunc ReplaceExtendsFile(in []byte, service string, value string) ([]byte, error) {\n\tvar doc yaml.Node\n\terr := yaml.Unmarshal(in, &doc)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif doc.Kind != yaml.DocumentNode {\n\t\treturn nil, fmt.Errorf(\"expected document kind %v, got %v\", yaml.DocumentNode, doc.Kind)\n\t}\n\troot := doc.Content[0]\n\tif root.Kind != yaml.MappingNode {\n\t\treturn nil, fmt.Errorf(\"expected document root to be a mapping, got %v\", root.Kind)\n\t}\n\n\tservices, err := getMapping(root, \"services\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttarget, err := getMapping(services, service)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\textends, err := getMapping(target, \"extends\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfile, err := getMapping(extends, \"file\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// we've found target `file` yaml node. 
Let's replace value in stream at node position\n\treturn replace(in, file.Line, file.Column, value), nil\n}\n\n// ReplaceEnvFile changes value for service.env_file in input yaml stream, preserving formatting\nfunc ReplaceEnvFile(in []byte, service string, i int, value string) ([]byte, error) {\n\tvar doc yaml.Node\n\terr := yaml.Unmarshal(in, &doc)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif doc.Kind != yaml.DocumentNode {\n\t\treturn nil, fmt.Errorf(\"expected document kind %v, got %v\", yaml.DocumentNode, doc.Kind)\n\t}\n\troot := doc.Content[0]\n\tif root.Kind != yaml.MappingNode {\n\t\treturn nil, fmt.Errorf(\"expected document root to be a mapping, got %v\", root.Kind)\n\t}\n\n\tservices, err := getMapping(root, \"services\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttarget, err := getMapping(services, service)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tenvFile, err := getMapping(target, \"env_file\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// env_file can be either a string, sequence of strings, or sequence of mappings with path attribute\n\tif envFile.Kind == yaml.SequenceNode {\n\t\tenvFile = envFile.Content[i]\n\t\tif envFile.Kind == yaml.MappingNode {\n\t\t\tenvFile, err = getMapping(envFile, \"path\")\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\treturn replace(in, envFile.Line, envFile.Column, value), nil\n\t} else {\n\t\treturn replace(in, envFile.Line, envFile.Column, value), nil\n\t}\n}\n\nfunc getMapping(root *yaml.Node, key string) (*yaml.Node, error) {\n\tvar node *yaml.Node\n\tl := len(root.Content)\n\tfor i := 0; i < l; i += 2 {\n\t\tk := root.Content[i]\n\t\tif k.Kind != yaml.ScalarNode || k.Tag != \"!!str\" {\n\t\t\treturn nil, fmt.Errorf(\"expected mapping key to be a string, got %v %v\", root.Kind, k.Tag)\n\t\t}\n\t\tif k.Value == key {\n\t\t\tnode = root.Content[i+1]\n\t\t\treturn node, nil\n\t\t}\n\t}\n\treturn nil, fmt.Errorf(\"key %v not found\", key)\n}\n\n// replace changes 
yaml node value in stream at position, preserving content\nfunc replace(in []byte, line int, column int, value string) []byte {\n\tvar out []byte\n\tl := 1\n\tpos := 0\n\tfor _, b := range in {\n\t\tif b == '\\n' {\n\t\t\tl++\n\t\t\tif l == line {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tpos++\n\t}\n\tpos += column\n\tout = append(out, in[0:pos]...)\n\tout = append(out, []byte(value)...)\n\tfor ; pos < len(in); pos++ {\n\t\tif in[pos] == '\\n' {\n\t\t\tbreak\n\t\t}\n\t}\n\tout = append(out, in[pos:]...)\n\treturn out\n}\n"
  },
  {
    "path": "pkg/compose/transform/replace_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage transform\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestReplace(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\tin   string\n\t\twant string\n\t}{\n\t\t{\n\t\t\tname: \"simple\",\n\t\t\tin: `services:\n  test:\n    extends:\n      file: foo.yaml\n      service: foo\n`,\n\t\t\twant: `services:\n  test:\n    extends:\n      file: REPLACED\n      service: foo\n`,\n\t\t},\n\t\t{\n\t\t\tname: \"last line\",\n\t\t\tin: `services:\n  test:\n    extends:\n      service: foo\n      file: foo.yaml\n`,\n\t\t\twant: `services:\n  test:\n    extends:\n      service: foo\n      file: REPLACED\n`,\n\t\t},\n\t\t{\n\t\t\tname: \"last line no CR\",\n\t\t\tin: `services:\n  test:\n    extends:\n      service: foo\n      file: foo.yaml`,\n\t\t\twant: `services:\n  test:\n    extends:\n      service: foo\n      file: REPLACED`,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := ReplaceExtendsFile([]byte(tt.in), \"test\", \"REPLACED\")\n\t\t\tassert.NilError(t, err)\n\t\t\tif !reflect.DeepEqual(got, []byte(tt.want)) {\n\t\t\t\tt.Errorf(\"ReplaceExtendsFile() got = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/compose/up.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/signal\"\n\t\"slices\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"syscall\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/containerd/errdefs\"\n\t\"github.com/docker/cli/cli\"\n\t\"github.com/eiannone/keyboard\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/cmd/formatter\"\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Up(ctx context.Context, project *types.Project, options api.UpOptions) error { //nolint:gocyclo\n\terr := Run(ctx, tracing.SpanWrapFunc(\"project/up\", tracing.ProjectOptions(ctx, project), func(ctx context.Context) error {\n\t\terr := s.create(ctx, project, options.Create)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif options.Start.Attach == nil {\n\t\t\treturn s.start(ctx, project.Name, options.Start, nil)\n\t\t}\n\t\treturn nil\n\t}), \"up\", s.events)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif options.Start.Attach == nil {\n\t\treturn err\n\t}\n\tif s.dryRun {\n\t\t_, _ = fmt.Fprintln(s.stdout(), \"end of 'compose up' output, interactive run is not supported in dry-run mode\")\n\t\treturn err\n\t}\n\n\t// if we get a second signal 
during shutdown, we kill the services\n\t// immediately, so the channel needs to have sufficient capacity or\n\t// we might miss a signal while setting up the second channel read\n\t// (this is also why signal.Notify is used vs signal.NotifyContext)\n\tsignalChan := make(chan os.Signal, 2)\n\tsignal.Notify(signalChan, syscall.SIGINT, syscall.SIGTERM)\n\tdefer signal.Stop(signalChan)\n\tvar isTerminated atomic.Bool\n\n\tvar (\n\t\tlogConsumer    = options.Start.Attach\n\t\tnavigationMenu *formatter.LogKeyboard\n\t\tkEvents        <-chan keyboard.KeyEvent\n\t)\n\tif options.Start.NavigationMenu {\n\t\tkEvents, err = keyboard.GetKeys(100)\n\t\tif err != nil {\n\t\t\tlogrus.Warnf(\"could not start menu, an error occurred while starting: %v\", err)\n\t\t\toptions.Start.NavigationMenu = false\n\t\t} else {\n\t\t\tdefer keyboard.Close() //nolint:errcheck\n\t\t\tisDockerDesktopActive, err := s.isDesktopIntegrationActive(ctx)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\ttracing.KeyboardMetrics(ctx, options.Start.NavigationMenu, isDockerDesktopActive)\n\t\t\tnavigationMenu = formatter.NewKeyboardManager(isDockerDesktopActive, signalChan)\n\t\t\tlogConsumer = navigationMenu.Decorate(logConsumer)\n\t\t}\n\t}\n\n\twatcher, err := NewWatcher(project, options, s.watch, logConsumer)\n\tif err != nil && options.Start.Watch {\n\t\treturn err\n\t}\n\n\tif navigationMenu != nil && watcher != nil {\n\t\tnavigationMenu.EnableWatch(options.Start.Watch, watcher)\n\t}\n\n\tprinter := newLogPrinter(logConsumer)\n\n\t// global context to handle canceling goroutines\n\tglobalCtx, cancel := context.WithCancel(ctx)\n\tdefer cancel()\n\n\tif navigationMenu != nil {\n\t\tnavigationMenu.EnableDetach(cancel)\n\t}\n\n\tvar (\n\t\teg   errgroup.Group\n\t\tmu   sync.Mutex\n\t\terrs []error\n\t)\n\n\tappendErr := func(err error) {\n\t\tif err != nil {\n\t\t\tmu.Lock()\n\t\t\terrs = append(errs, err)\n\t\t\tmu.Unlock()\n\t\t}\n\t}\n\n\teg.Go(func() error {\n\t\tfirst := 
true\n\t\tgracefulTeardown := func() {\n\t\t\tfirst = false\n\t\t\ts.events.On(newEvent(api.ResourceCompose, api.Working, api.StatusStopping, \"Gracefully Stopping... press Ctrl+C again to force\"))\n\t\t\teg.Go(func() error {\n\t\t\t\terr = s.stop(context.WithoutCancel(globalCtx), project.Name, api.StopOptions{\n\t\t\t\t\tServices: options.Create.Services,\n\t\t\t\t\tProject:  project,\n\t\t\t\t}, printer.HandleEvent)\n\t\t\t\tappendErr(err)\n\t\t\t\treturn nil\n\t\t\t})\n\t\t\tisTerminated.Store(true)\n\t\t}\n\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-globalCtx.Done():\n\t\t\t\tif watcher != nil {\n\t\t\t\t\treturn watcher.Stop()\n\t\t\t\t}\n\t\t\t\treturn nil\n\t\t\tcase <-ctx.Done():\n\t\t\t\tif first {\n\t\t\t\t\tgracefulTeardown()\n\t\t\t\t}\n\t\t\tcase <-signalChan:\n\t\t\t\tif first {\n\t\t\t\t\t_ = keyboard.Close()\n\t\t\t\t\tgracefulTeardown()\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\teg.Go(func() error {\n\t\t\t\t\terr := s.kill(context.WithoutCancel(globalCtx), project.Name, api.KillOptions{\n\t\t\t\t\t\tServices: options.Create.Services,\n\t\t\t\t\t\tProject:  project,\n\t\t\t\t\t\tAll:      true,\n\t\t\t\t\t})\n\t\t\t\t\t// Ignore errors indicating that some of the containers were already stopped or removed.\n\t\t\t\t\tif errdefs.IsNotFound(err) || errdefs.IsConflict(err) || errors.Is(err, api.ErrNoResources) {\n\t\t\t\t\t\treturn nil\n\t\t\t\t\t}\n\n\t\t\t\t\tappendErr(err)\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\treturn nil\n\t\t\tcase event := <-kEvents:\n\t\t\t\tnavigationMenu.HandleKeyEvents(globalCtx, event, project, options)\n\t\t\t}\n\t\t}\n\t})\n\n\tif options.Start.Watch && watcher != nil {\n\t\tif err := watcher.Start(globalCtx); err != nil {\n\t\t\t// cancel the global context to terminate background goroutines\n\t\t\tcancel()\n\t\t\t_ = eg.Wait()\n\t\t\treturn err\n\t\t}\n\t}\n\n\tmonitor := newMonitor(s.apiClient(), project.Name)\n\tif len(options.Start.Services) > 0 {\n\t\tmonitor.withServices(options.Start.Services)\n\t} else {\n\t\t// 
Start.AttachTo has already been curated with only the services to monitor\n\t\tmonitor.withServices(options.Start.AttachTo)\n\t}\n\tmonitor.withListener(printer.HandleEvent)\n\n\tvar exitCode int\n\tif options.Start.OnExit != api.CascadeIgnore {\n\t\tonce := true\n\t\t// detect first container to exit to trigger application shutdown\n\t\tmonitor.withListener(func(event api.ContainerEvent) {\n\t\t\tif once && event.Type == api.ContainerEventExited {\n\t\t\t\tif options.Start.OnExit == api.CascadeFail && event.ExitCode == 0 {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tonce = false\n\t\t\t\texitCode = event.ExitCode\n\t\t\t\ts.events.On(newEvent(api.ResourceCompose, api.Working, api.StatusStopping, \"Aborting on container exit...\"))\n\t\t\t\teg.Go(func() error {\n\t\t\t\t\terr = s.stop(context.WithoutCancel(globalCtx), project.Name, api.StopOptions{\n\t\t\t\t\t\tServices: options.Create.Services,\n\t\t\t\t\t\tProject:  project,\n\t\t\t\t\t}, printer.HandleEvent)\n\t\t\t\t\tappendErr(err)\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t}\n\t\t})\n\t}\n\n\tif options.Start.ExitCodeFrom != \"\" {\n\t\tonce := true\n\t\t// capture exit code from first container to exit with selected service\n\t\tmonitor.withListener(func(event api.ContainerEvent) {\n\t\t\tif once && event.Type == api.ContainerEventExited && event.Service == options.Start.ExitCodeFrom {\n\t\t\t\texitCode = event.ExitCode\n\t\t\t\tonce = false\n\t\t\t}\n\t\t})\n\t}\n\n\tcontainers, err := s.attach(globalCtx, project, printer.HandleEvent, options.Start.AttachTo)\n\tif err != nil {\n\t\tcancel()\n\t\t_ = eg.Wait()\n\t\treturn err\n\t}\n\tattached := make([]string, len(containers))\n\tfor i, ctr := range containers {\n\t\tattached[i] = ctr.ID\n\t}\n\n\tmonitor.withListener(func(event api.ContainerEvent) {\n\t\tif event.Type != api.ContainerEventStarted {\n\t\t\treturn\n\t\t}\n\t\tif slices.Contains(attached, event.ID) && !event.Restarting {\n\t\t\treturn\n\t\t}\n\t\teg.Go(func() error {\n\t\t\tres, err := 
s.apiClient().ContainerInspect(globalCtx, event.ID, client.ContainerInspectOptions{})\n\t\t\tif err != nil {\n\t\t\t\tappendErr(err)\n\t\t\t\treturn nil\n\t\t\t}\n\n\t\t\terr = s.doLogContainer(globalCtx, options.Start.Attach, event.Source, res.Container, api.LogOptions{\n\t\t\t\tFollow: true,\n\t\t\t\tSince:  res.Container.State.StartedAt,\n\t\t\t})\n\t\t\tif errdefs.IsNotImplemented(err) {\n\t\t\t\t// container may be configured with logging_driver: none\n\t\t\t\t// as container already started, we might miss the very first logs. But still better than none\n\t\t\t\terr := s.doAttachContainer(globalCtx, event.Service, event.ID, event.Source, printer.HandleEvent)\n\t\t\t\tappendErr(err)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tappendErr(err)\n\t\t\treturn nil\n\t\t})\n\t})\n\n\teg.Go(func() error {\n\t\terr := monitor.Start(globalCtx)\n\t\t// cancel the global context to terminate signal-handler goroutines\n\t\tcancel()\n\t\tappendErr(err)\n\t\treturn nil\n\t})\n\n\t// We use the parent context without cancellation as we manage sigterm to stop the stack\n\terr = s.start(context.WithoutCancel(ctx), project.Name, options.Start, printer.HandleEvent)\n\tif err != nil && !isTerminated.Load() { // Ignore error if the process is terminated\n\t\tcancel()\n\t\t_ = eg.Wait()\n\t\treturn err\n\t}\n\n\t_ = eg.Wait()\n\terr = errors.Join(errs...)\n\tif exitCode != 0 {\n\t\terrMsg := \"\"\n\t\tif err != nil {\n\t\t\terrMsg = err.Error()\n\t\t}\n\t\treturn cli.StatusError{StatusCode: exitCode, Status: errMsg}\n\t}\n\treturn err\n}\n"
  },
  {
    "path": "pkg/compose/viz.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\n// maps a service with the services it depends on\ntype vizGraph map[*types.ServiceConfig][]*types.ServiceConfig\n\nfunc (s *composeService) Viz(_ context.Context, project *types.Project, opts api.VizOptions) (string, error) {\n\tgraph := make(vizGraph)\n\tfor _, service := range project.Services {\n\t\tgraph[&service] = make([]*types.ServiceConfig, 0, len(service.DependsOn))\n\t\tfor dependencyName := range service.DependsOn {\n\t\t\t// no error should be returned since dependencyName should exist\n\t\t\tdependency, _ := project.GetService(dependencyName)\n\t\t\tgraph[&service] = append(graph[&service], &dependency)\n\t\t}\n\t}\n\n\t// build graphviz graph\n\tvar graphBuilder strings.Builder\n\n\t// graph name\n\tgraphBuilder.WriteString(\"digraph \")\n\twriteQuoted(&graphBuilder, project.Name)\n\tgraphBuilder.WriteString(\" {\\n\")\n\n\t// graph layout\n\t// dot is the perfect layout for this use case since graph is directed and hierarchical\n\tgraphBuilder.WriteString(opts.Indentation + \"layout=dot;\\n\")\n\n\taddNodes(&graphBuilder, graph, project.Name, &opts)\n\tgraphBuilder.WriteByte('\\n')\n\n\taddEdges(&graphBuilder, graph, 
&opts)\n\tgraphBuilder.WriteString(\"}\\n\")\n\n\treturn graphBuilder.String(), nil\n}\n\n// addNodes adds the corresponding graphviz representation of all the nodes in the given graph to the graphBuilder\n// returns the same graphBuilder\nfunc addNodes(graphBuilder *strings.Builder, graph vizGraph, projectName string, opts *api.VizOptions) *strings.Builder {\n\tfor serviceNode := range graph {\n\t\t// write:\n\t\t// \"service name\" [style=\"filled\" label<<font point-size=\"15\">service name</font>\n\t\tgraphBuilder.WriteString(opts.Indentation)\n\t\twriteQuoted(graphBuilder, serviceNode.Name)\n\t\tgraphBuilder.WriteString(\" [style=\\\"filled\\\" label=<<font point-size=\\\"15\\\">\")\n\t\tgraphBuilder.WriteString(serviceNode.Name)\n\t\tgraphBuilder.WriteString(\"</font>\")\n\n\t\tif opts.IncludeNetworks && len(serviceNode.Networks) > 0 {\n\t\t\tgraphBuilder.WriteString(\"<font point-size=\\\"10\\\">\")\n\t\t\tgraphBuilder.WriteString(\"<br/><br/><b>Networks:</b>\")\n\t\t\tfor _, networkName := range serviceNode.NetworksByPriority() {\n\t\t\t\tgraphBuilder.WriteString(\"<br/>\")\n\t\t\t\tgraphBuilder.WriteString(networkName)\n\t\t\t}\n\t\t\tgraphBuilder.WriteString(\"</font>\")\n\t\t}\n\n\t\tif opts.IncludePorts && len(serviceNode.Ports) > 0 {\n\t\t\tgraphBuilder.WriteString(\"<font point-size=\\\"10\\\">\")\n\t\t\tgraphBuilder.WriteString(\"<br/><br/><b>Ports:</b>\")\n\t\t\tfor _, portConfig := range serviceNode.Ports {\n\t\t\t\tgraphBuilder.WriteString(\"<br/>\")\n\t\t\t\tif portConfig.HostIP != \"\" {\n\t\t\t\t\tgraphBuilder.WriteString(portConfig.HostIP)\n\t\t\t\t\tgraphBuilder.WriteByte(':')\n\t\t\t\t}\n\t\t\t\tgraphBuilder.WriteString(portConfig.Published)\n\t\t\t\tgraphBuilder.WriteByte(':')\n\t\t\t\tgraphBuilder.WriteString(strconv.Itoa(int(portConfig.Target)))\n\t\t\t\tgraphBuilder.WriteString(\" (\")\n\t\t\t\tgraphBuilder.WriteString(portConfig.Protocol)\n\t\t\t\tgraphBuilder.WriteString(\", 
\")\n\t\t\t\tgraphBuilder.WriteString(portConfig.Mode)\n\t\t\t\tgraphBuilder.WriteString(\")\")\n\t\t\t}\n\t\t\tgraphBuilder.WriteString(\"</font>\")\n\t\t}\n\n\t\tif opts.IncludeImageName {\n\t\t\tgraphBuilder.WriteString(\"<font point-size=\\\"10\\\">\")\n\t\t\tgraphBuilder.WriteString(\"<br/><br/><b>Image:</b><br/>\")\n\t\t\tgraphBuilder.WriteString(api.GetImageNameOrDefault(*serviceNode, projectName))\n\t\t\tgraphBuilder.WriteString(\"</font>\")\n\t\t}\n\n\t\tgraphBuilder.WriteString(\">];\\n\")\n\t}\n\n\treturn graphBuilder\n}\n\n// addEdges adds the corresponding graphviz representation of all edges in the given graph to the graphBuilder\n// returns the same graphBuilder\nfunc addEdges(graphBuilder *strings.Builder, graph vizGraph, opts *api.VizOptions) *strings.Builder {\n\tfor parent, children := range graph {\n\t\tfor _, child := range children {\n\t\t\tgraphBuilder.WriteString(opts.Indentation)\n\t\t\twriteQuoted(graphBuilder, parent.Name)\n\t\t\tgraphBuilder.WriteString(\" -> \")\n\t\t\twriteQuoted(graphBuilder, child.Name)\n\t\t\tgraphBuilder.WriteString(\";\\n\")\n\t\t}\n\t}\n\n\treturn graphBuilder\n}\n\n// writeQuoted writes \"str\" to builder\nfunc writeQuoted(builder *strings.Builder, str string) {\n\tbuilder.WriteByte('\"')\n\tbuilder.WriteString(str)\n\tbuilder.WriteByte('\"')\n}\n"
  },
  {
    "path": "pkg/compose/viz_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\t\"go.uber.org/mock/gomock\"\n\n\tcompose \"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n)\n\nfunc TestViz(t *testing.T) {\n\tproject := types.Project{\n\t\tName:       \"viz-test\",\n\t\tWorkingDir: \"/home\",\n\t\tServices: types.Services{\n\t\t\t\"service1\": {\n\t\t\t\tName:  \"service1\",\n\t\t\t\tImage: \"image-for-service1\",\n\t\t\t\tPorts: []types.ServicePortConfig{\n\t\t\t\t\t{\n\t\t\t\t\t\tPublished: \"80\",\n\t\t\t\t\t\tTarget:    80,\n\t\t\t\t\t\tProtocol:  \"tcp\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPublished: \"53\",\n\t\t\t\t\t\tTarget:    533,\n\t\t\t\t\t\tProtocol:  \"udp\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tNetworks: map[string]*types.ServiceNetworkConfig{\n\t\t\t\t\t\"internal\": nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"service2\": {\n\t\t\t\tName:  \"service2\",\n\t\t\t\tImage: \"image-for-service2\",\n\t\t\t\tPorts: []types.ServicePortConfig{},\n\t\t\t},\n\t\t\t\"service3\": {\n\t\t\t\tName:  \"service3\",\n\t\t\t\tImage: \"some-image\",\n\t\t\t\tDependsOn: map[string]types.ServiceDependency{\n\t\t\t\t\t\"service2\": {},\n\t\t\t\t\t\"service1\": 
{},\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"service4\": {\n\t\t\t\tName:  \"service4\",\n\t\t\t\tImage: \"another-image\",\n\t\t\t\tDependsOn: map[string]types.ServiceDependency{\n\t\t\t\t\t\"service3\": {},\n\t\t\t\t},\n\t\t\t\tPorts: []types.ServicePortConfig{\n\t\t\t\t\t{\n\t\t\t\t\t\tPublished: \"8080\",\n\t\t\t\t\t\tTarget:    80,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tNetworks: map[string]*types.ServiceNetworkConfig{\n\t\t\t\t\t\"external\": nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\t\"With host IP\": {\n\t\t\t\tName:  \"With host IP\",\n\t\t\t\tImage: \"user/image-name\",\n\t\t\t\tDependsOn: map[string]types.ServiceDependency{\n\t\t\t\t\t\"service1\": {},\n\t\t\t\t},\n\t\t\t\tPorts: []types.ServicePortConfig{\n\t\t\t\t\t{\n\t\t\t\t\t\tPublished: \"8888\",\n\t\t\t\t\t\tTarget:    8080,\n\t\t\t\t\t\tHostIP:    \"127.0.0.1\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tNetworks: types.Networks{\n\t\t\t\"internal\": types.NetworkConfig{},\n\t\t\t\"external\": types.NetworkConfig{},\n\t\t\t\"not-used\": types.NetworkConfig{},\n\t\t},\n\t\tVolumes:          nil,\n\t\tSecrets:          nil,\n\t\tConfigs:          nil,\n\t\tExtensions:       nil,\n\t\tComposeFiles:     nil,\n\t\tEnvironment:      nil,\n\t\tDisabledServices: nil,\n\t\tProfiles:         nil,\n\t}\n\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\tcli := mocks.NewMockCli(mockCtrl)\n\ttested, err := NewComposeService(cli)\n\trequire.NoError(t, err)\n\n\tt.Run(\"viz (no ports, networks or image)\", func(t *testing.T) {\n\t\tgraphStr, err := tested.Viz(t.Context(), &project, compose.VizOptions{\n\t\t\tIndentation:      \"  \",\n\t\t\tIncludePorts:     false,\n\t\t\tIncludeImageName: false,\n\t\t\tIncludeNetworks:  false,\n\t\t})\n\t\trequire.NoError(t, err, \"viz command failed\")\n\n\t\t// check indentation\n\t\tassert.Contains(t, graphStr, \"\\n  \", graphStr)\n\t\tassert.NotContains(t, graphStr, \"\\n   \", graphStr)\n\n\t\t// check digraph name\n\t\tassert.Contains(t, graphStr, \"digraph 
\\\"\"+project.Name+\"\\\"\", graphStr)\n\n\t\t// check nodes\n\t\tfor _, service := range project.Services {\n\t\t\tassert.Contains(t, graphStr, \"\\\"\"+service.Name+\"\\\" [style=\\\"filled\\\"\", graphStr)\n\t\t}\n\n\t\t// check node attributes\n\t\tassert.NotContains(t, graphStr, \"Networks\", graphStr)\n\t\tassert.NotContains(t, graphStr, \"Image\", graphStr)\n\t\tassert.NotContains(t, graphStr, \"Ports\", graphStr)\n\n\t\t// check edges that SHOULD exist in the generated graph\n\t\tallowedEdges := make(map[string][]string)\n\t\tfor name, service := range project.Services {\n\t\t\tallowed := make([]string, 0, len(service.DependsOn))\n\t\t\tfor depName := range service.DependsOn {\n\t\t\t\tallowed = append(allowed, depName)\n\t\t\t}\n\t\t\tallowedEdges[name] = allowed\n\t\t}\n\t\tfor serviceName, dependencies := range allowedEdges {\n\t\t\tfor _, dependencyName := range dependencies {\n\t\t\t\tassert.Contains(t, graphStr, \"\\\"\"+serviceName+\"\\\" -> \\\"\"+dependencyName+\"\\\"\", graphStr)\n\t\t\t}\n\t\t}\n\n\t\t// check edges that SHOULD NOT exist in the generated graph\n\t\tforbiddenEdges := make(map[string][]string)\n\t\tfor name, service := range project.Services {\n\t\t\tforbiddenEdges[name] = make([]string, 0, len(project.ServiceNames())-len(service.DependsOn))\n\t\t\tfor _, serviceName := range project.ServiceNames() {\n\t\t\t\t_, edgeExists := service.DependsOn[serviceName]\n\t\t\t\tif !edgeExists {\n\t\t\t\t\tforbiddenEdges[name] = append(forbiddenEdges[name], serviceName)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tfor serviceName, forbiddenDeps := range forbiddenEdges {\n\t\t\tfor _, forbiddenDep := range forbiddenDeps {\n\t\t\t\tassert.NotContains(t, graphStr, \"\\\"\"+serviceName+\"\\\" -> \\\"\"+forbiddenDep+\"\\\"\")\n\t\t\t}\n\t\t}\n\t})\n\n\tt.Run(\"viz (with ports, networks and image)\", func(t *testing.T) {\n\t\tgraphStr, err := tested.Viz(t.Context(), &project, compose.VizOptions{\n\t\t\tIndentation:      \"\\t\",\n\t\t\tIncludePorts:     
true,\n\t\t\tIncludeImageName: true,\n\t\t\tIncludeNetworks:  true,\n\t\t})\n\t\trequire.NoError(t, err, \"viz command failed\")\n\n\t\t// check indentation\n\t\tassert.Contains(t, graphStr, \"\\n\\t\", graphStr)\n\t\tassert.NotContains(t, graphStr, \"\\n\\t\\t\", graphStr)\n\n\t\t// check digraph name\n\t\tassert.Contains(t, graphStr, \"digraph \\\"\"+project.Name+\"\\\"\", graphStr)\n\n\t\t// check nodes\n\t\tfor _, service := range project.Services {\n\t\t\tassert.Contains(t, graphStr, \"\\\"\"+service.Name+\"\\\" [style=\\\"filled\\\"\", graphStr)\n\t\t}\n\n\t\t// check node attributes\n\t\tassert.Contains(t, graphStr, \"Networks\", graphStr)\n\t\tassert.Contains(t, graphStr, \">internal<\", graphStr)\n\t\tassert.Contains(t, graphStr, \">external<\", graphStr)\n\t\tassert.Contains(t, graphStr, \"Image\", graphStr)\n\t\tfor _, service := range project.Services {\n\t\t\tassert.Contains(t, graphStr, \">\"+service.Image+\"<\", graphStr)\n\t\t}\n\t\tassert.Contains(t, graphStr, \"Ports\", graphStr)\n\t\tfor _, service := range project.Services {\n\t\t\tfor _, portConfig := range service.Ports {\n\t\t\t\tassert.Contains(t, graphStr, portConfig.Published+\":\"+strconv.Itoa(int(portConfig.Target)), graphStr)\n\t\t\t}\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "pkg/compose/volumes.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"slices\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Volumes(ctx context.Context, project string, options api.VolumesOptions) ([]api.VolumesSummary, error) {\n\tallContainers, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tFilters: projectFilter(project),\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar containers []container.Summary\n\n\tif len(options.Services) > 0 {\n\t\t// filter service containers\n\t\tfor _, c := range allContainers.Items {\n\t\t\tif slices.Contains(options.Services, c.Labels[api.ServiceLabel]) {\n\t\t\t\tcontainers = append(containers, c)\n\t\t\t}\n\t\t}\n\t} else {\n\t\tcontainers = allContainers.Items\n\t}\n\n\tvolumesResponse, err := s.apiClient().VolumeList(ctx, client.VolumeListOptions{\n\t\tFilters: projectFilter(project),\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tprojectVolumes := volumesResponse.Items\n\n\tif len(options.Services) == 0 {\n\t\treturn projectVolumes, nil\n\t}\n\n\tvar volumes []api.VolumesSummary\n\n\t// create a name lookup of volumes used by containers\n\tserviceVolumes := make(map[string]bool)\n\n\tfor _, ctr := range containers {\n\t\tfor _, mount := range ctr.Mounts 
{\n\t\t\tserviceVolumes[mount.Name] = true\n\t\t}\n\t}\n\n\t// append if volumes in this project are in serviceVolumes\n\tfor _, v := range projectVolumes {\n\t\tif serviceVolumes[v.Name] {\n\t\t\tvolumes = append(volumes, v)\n\t\t}\n\t}\n\n\treturn volumes, nil\n}\n"
  },
  {
    "path": "pkg/compose/volumes_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"testing\"\n\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/volume\"\n\t\"github.com/moby/moby/client\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestVolumes(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tdefer mockCtrl.Finish()\n\n\tmockApi, mockCli := prepareMocks(mockCtrl)\n\ttested := composeService{\n\t\tdockerCli: mockCli,\n\t}\n\n\t// Create test volumes\n\tvol1 := volume.Volume{Name: testProject + \"_vol1\"}\n\tvol2 := volume.Volume{Name: testProject + \"_vol2\"}\n\tvol3 := volume.Volume{Name: testProject + \"_vol3\"}\n\n\t// Create test containers with volume mounts\n\tc1 := container.Summary{\n\t\tLabels: map[string]string{api.ServiceLabel: \"service1\"},\n\t\tMounts: []container.MountPoint{\n\t\t\t{Name: testProject + \"_vol1\"},\n\t\t\t{Name: testProject + \"_vol2\"},\n\t\t},\n\t}\n\tc2 := container.Summary{\n\t\tLabels: map[string]string{api.ServiceLabel: \"service2\"},\n\t\tMounts: []container.MountPoint{\n\t\t\t{Name: testProject + \"_vol3\"},\n\t\t},\n\t}\n\n\tlistOpts := client.ContainerListOptions{Filters: projectFilter(testProject)}\n\tvolumeListOpts := client.VolumeListOptions{Filters: projectFilter(testProject)}\n\tvolumeReturn := 
client.VolumeListResult{\n\t\tItems: []volume.Volume{vol1, vol2, vol3},\n\t}\n\tcontainerReturn := client.ContainerListResult{\n\t\tItems: []container.Summary{c1, c2},\n\t}\n\n\tmockApi.EXPECT().ContainerList(t.Context(), listOpts).Times(2).Return(containerReturn, nil)\n\tmockApi.EXPECT().VolumeList(t.Context(), volumeListOpts).Times(2).Return(volumeReturn, nil)\n\n\t// Test without service filter - should return all project volumes\n\tvolumes, err := tested.Volumes(t.Context(), testProject, api.VolumesOptions{})\n\texpected := []api.VolumesSummary{vol1, vol2, vol3}\n\tassert.NilError(t, err)\n\tassert.DeepEqual(t, volumes, expected)\n\n\t// Test with service filter - should only return volumes used by service1\n\tvolumes, err = tested.Volumes(t.Context(), testProject, api.VolumesOptions{Services: []string{\"service1\"}})\n\texpected = []api.VolumesSummary{vol1, vol2}\n\tassert.NilError(t, err)\n\tassert.DeepEqual(t, volumes, expected)\n}\n"
  },
  {
    "path": "pkg/compose/wait.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/moby/moby/client\"\n\t\"golang.org/x/sync/errgroup\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc (s *composeService) Wait(ctx context.Context, projectName string, options api.WaitOptions) (int64, error) {\n\tcontainers, err := s.getContainers(ctx, projectName, oneOffInclude, false, options.Services...)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\tif len(containers) == 0 {\n\t\treturn 0, fmt.Errorf(\"no containers for project %q\", projectName)\n\t}\n\n\teg, waitCtx := errgroup.WithContext(ctx)\n\tvar statusCode int64\n\tfor _, ctr := range containers {\n\t\teg.Go(func() error {\n\t\t\tvar err error\n\t\t\tres := s.apiClient().ContainerWait(waitCtx, ctr.ID, client.ContainerWaitOptions{})\n\t\t\tselect {\n\t\t\tcase result := <-res.Result:\n\t\t\t\t_, _ = fmt.Fprintf(s.stdout(), \"container %q exited with status code %d\\n\", ctr.ID, result.StatusCode)\n\t\t\t\tstatusCode = result.StatusCode\n\t\t\tcase err = <-res.Error:\n\t\t\t}\n\t\t\treturn err\n\t\t})\n\t}\n\n\terr = eg.Wait()\n\tif err != nil {\n\t\treturn 42, err // Ignore abort flag in case of error in wait\n\t}\n\n\tif options.DownProjectOnContainerExit {\n\t\treturn statusCode, s.Down(ctx, projectName, api.DownOptions{\n\t\t\tRemoveOrphans: true,\n\t\t})\n\t}\n\n\treturn statusCode, err\n}\n"
  },
  {
    "path": "pkg/compose/watch.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"strconv\"\n\t\"strings\"\n\tgsync \"sync\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/compose-spec/compose-go/v2/utils\"\n\tccli \"github.com/docker/cli/cli/command/container\"\n\t\"github.com/go-viper/mapstructure/v2\"\n\t\"github.com/moby/buildkit/util/progress/progressui\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/sirupsen/logrus\"\n\t\"golang.org/x/sync/errgroup\"\n\n\tpathutil \"github.com/docker/compose/v5/internal/paths\"\n\t\"github.com/docker/compose/v5/internal/sync\"\n\t\"github.com/docker/compose/v5/internal/tracing\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\tcutils \"github.com/docker/compose/v5/pkg/utils\"\n\t\"github.com/docker/compose/v5/pkg/watch\"\n)\n\ntype WatchFunc func(ctx context.Context, project *types.Project, options api.WatchOptions) (func() error, error)\n\ntype Watcher struct {\n\tproject *types.Project\n\toptions api.WatchOptions\n\twatchFn WatchFunc\n\tstopFn  func()\n\terrCh   chan error\n}\n\nfunc NewWatcher(project *types.Project, options api.UpOptions, w WatchFunc, consumer api.LogConsumer) (*Watcher, error) {\n\tfor i := range project.Services {\n\t\tservice 
:= project.Services[i]\n\n\t\tif service.Develop != nil && service.Develop.Watch != nil {\n\t\t\tbuild := options.Create.Build\n\t\t\treturn &Watcher{\n\t\t\t\tproject: project,\n\t\t\t\toptions: api.WatchOptions{\n\t\t\t\t\tLogTo: consumer,\n\t\t\t\t\tBuild: build,\n\t\t\t\t},\n\t\t\t\twatchFn: w,\n\t\t\t\terrCh:   make(chan error),\n\t\t\t}, nil\n\t\t}\n\t}\n\t// none of the services is eligible to watch\n\treturn nil, fmt.Errorf(\"none of the selected services is configured for watch, see https://docs.docker.com/compose/how-tos/file-watch/\")\n}\n\n// ensure state changes are atomic\nvar mx gsync.Mutex\n\nfunc (w *Watcher) Start(ctx context.Context) error {\n\tmx.Lock()\n\tdefer mx.Unlock()\n\tctx, cancelFunc := context.WithCancel(ctx)\n\tw.stopFn = cancelFunc\n\twait, err := w.watchFn(ctx, w.project, w.options)\n\tif err != nil {\n\t\tgo func() {\n\t\t\tw.errCh <- err\n\t\t}()\n\t\treturn err\n\t}\n\tgo func() {\n\t\tw.errCh <- wait()\n\t}()\n\treturn nil\n}\n\nfunc (w *Watcher) Stop() error {\n\tmx.Lock()\n\tdefer mx.Unlock()\n\tif w.stopFn == nil {\n\t\treturn nil\n\t}\n\tw.stopFn()\n\tw.stopFn = nil\n\terr := <-w.errCh\n\treturn err\n}\n\n// getSyncImplementation returns an appropriate sync implementation for the\n// project.\n//\n// Currently, an implementation that batches files and transfers them using\n// the Moby `Untar` API.\nfunc (s *composeService) getSyncImplementation(project *types.Project) (sync.Syncer, error) {\n\tvar useTar bool\n\tif useTarEnv, ok := os.LookupEnv(\"COMPOSE_EXPERIMENTAL_WATCH_TAR\"); ok {\n\t\tuseTar, _ = strconv.ParseBool(useTarEnv)\n\t} else {\n\t\tuseTar = true\n\t}\n\tif !useTar {\n\t\treturn nil, errors.New(\"no available sync implementation\")\n\t}\n\n\treturn sync.NewTar(project.Name, tarDockerClient{s: s}), nil\n}\n\nfunc (s *composeService) Watch(ctx context.Context, project *types.Project, options api.WatchOptions) error {\n\twait, err := s.watch(ctx, project, options)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn 
wait()\n}\n\ntype watchRule struct {\n\ttypes.Trigger\n\tinclude watch.PathMatcher\n\tignore  watch.PathMatcher\n\tservice string\n}\n\nfunc (r watchRule) Matches(event watch.FileEvent) *sync.PathMapping {\n\thostPath := string(event)\n\tif !pathutil.IsChild(r.Path, hostPath) {\n\t\treturn nil\n\t}\n\tincluded, err := r.include.Matches(hostPath)\n\tif err != nil {\n\t\tlogrus.Warnf(\"error include matching %q: %v\", hostPath, err)\n\t\treturn nil\n\t}\n\tif !included {\n\t\tlogrus.Debugf(\"%s is not matching include pattern\", hostPath)\n\t\treturn nil\n\t}\n\tisIgnored, err := r.ignore.Matches(hostPath)\n\tif err != nil {\n\t\tlogrus.Warnf(\"error ignore matching %q: %v\", hostPath, err)\n\t\treturn nil\n\t}\n\n\tif isIgnored {\n\t\tlogrus.Debugf(\"%s is matching ignore pattern\", hostPath)\n\t\treturn nil\n\t}\n\n\tvar containerPath string\n\tif r.Target != \"\" {\n\t\trel, err := filepath.Rel(r.Path, hostPath)\n\t\tif err != nil {\n\t\t\tlogrus.Warnf(\"error making %s relative to %s: %v\", hostPath, r.Path, err)\n\t\t\treturn nil\n\t\t}\n\t\t// always use Unix-style paths for inside the container\n\t\tcontainerPath = path.Join(r.Target, filepath.ToSlash(rel))\n\t}\n\treturn &sync.PathMapping{\n\t\tHostPath:      hostPath,\n\t\tContainerPath: containerPath,\n\t}\n}\n\nfunc (s *composeService) watch(ctx context.Context, project *types.Project, options api.WatchOptions) (func() error, error) { //nolint: gocyclo\n\tvar err error\n\tif project, err = project.WithSelectedServices(options.Services); err != nil {\n\t\treturn nil, err\n\t}\n\tsyncer, err := s.getSyncImplementation(project)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\teg, ctx := errgroup.WithContext(ctx)\n\n\tvar (\n\t\trules []watchRule\n\t\tpaths []string\n\t)\n\tfor serviceName, service := range project.Services {\n\t\tconfig, err := loadDevelopmentConfig(service, project)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tif service.Develop != nil {\n\t\t\tconfig = 
service.Develop\n\t\t}\n\n\t\tif config == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tfor _, trigger := range config.Watch {\n\t\t\tif trigger.Action == types.WatchActionRebuild {\n\t\t\t\tif service.Build == nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"can't watch service %q with action %s without a build context\", service.Name, types.WatchActionRebuild)\n\t\t\t\t}\n\t\t\t\tif options.Build == nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"--no-build is incompatible with watch action %s in service %s\", types.WatchActionRebuild, service.Name)\n\t\t\t\t}\n\t\t\t\t// set the service to always be built - watch triggers `Up()` when it receives a rebuild event\n\t\t\t\tservice.PullPolicy = types.PullPolicyBuild\n\t\t\t\tproject.Services[serviceName] = service\n\t\t\t}\n\t\t}\n\n\t\tfor _, trigger := range config.Watch {\n\t\t\tif isSync(trigger) && checkIfPathAlreadyBindMounted(trigger.Path, service.Volumes) {\n\t\t\t\tlogrus.Warnf(\"path '%s' also declared by a bind mount volume, this path won't be monitored!\\n\", trigger.Path)\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\tshouldInitialSync := trigger.InitialSync\n\n\t\t\t\t// Check legacy extension attribute for backward compatibility\n\t\t\t\tif !shouldInitialSync {\n\t\t\t\t\tvar legacyInitialSync bool\n\t\t\t\t\tsuccess, err := trigger.Extensions.Get(\"x-initialSync\", &legacyInitialSync)\n\t\t\t\t\tif err == nil && success && legacyInitialSync {\n\t\t\t\t\t\tshouldInitialSync = true\n\t\t\t\t\t\tlogrus.Warnf(\"x-initialSync is DEPRECATED, please use the official `initial_sync` attribute\\n\")\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif shouldInitialSync && isSync(trigger) {\n\t\t\t\t\t// Need to check initial files are in container that are meant to be synced from watch action\n\t\t\t\t\terr := s.initialSync(ctx, project, service, trigger, syncer)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tpaths = append(paths, trigger.Path)\n\t\t}\n\n\t\tserviceWatchRules, err := 
getWatchRules(config, service)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\trules = append(rules, serviceWatchRules...)\n\t}\n\n\tif len(paths) == 0 {\n\t\treturn nil, fmt.Errorf(\"none of the selected services is configured for watch, consider setting a 'develop' section\")\n\t}\n\n\twatcher, err := watch.NewWatcher(paths)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = watcher.Start()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\teg.Go(func() error {\n\t\treturn s.watchEvents(ctx, project, options, watcher, syncer, rules)\n\t})\n\toptions.LogTo.Log(api.WatchLogger, \"Watch enabled\")\n\n\treturn func() error {\n\t\terr := eg.Wait()\n\t\tif werr := watcher.Close(); werr != nil {\n\t\t\tlogrus.Debugf(\"Error closing Watcher: %v\", werr)\n\t\t}\n\t\treturn err\n\t}, nil\n}\n\nfunc getWatchRules(config *types.DevelopConfig, service types.ServiceConfig) ([]watchRule, error) {\n\tvar rules []watchRule\n\n\tdockerIgnores, err := watch.LoadDockerIgnore(service.Build)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// add a hardcoded set of ignores on top of what came from .dockerignore\n\t// some of this should likely be configurable (e.g. 
there could be cases\n\t// where you want `.git` to be synced) but this is suitable for now\n\tdotGitIgnore, err := watch.NewDockerPatternMatcher(\"/\", []string{\".git/\"})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor _, trigger := range config.Watch {\n\t\tignore, err := watch.NewDockerPatternMatcher(trigger.Path, trigger.Ignore)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tvar include watch.PathMatcher\n\t\tif len(trigger.Include) == 0 {\n\t\t\tinclude = watch.AnyMatcher{}\n\t\t} else {\n\t\t\tinclude, err = watch.NewDockerPatternMatcher(trigger.Path, trigger.Include)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\n\t\trules = append(rules, watchRule{\n\t\t\tTrigger: trigger,\n\t\t\tinclude: include,\n\t\t\tignore: watch.NewCompositeMatcher(\n\t\t\t\tdockerIgnores,\n\t\t\t\twatch.EphemeralPathMatcher(),\n\t\t\t\tdotGitIgnore,\n\t\t\t\tignore,\n\t\t\t),\n\t\t\tservice: service.Name,\n\t\t})\n\t}\n\treturn rules, nil\n}\n\nfunc isSync(trigger types.Trigger) bool {\n\treturn trigger.Action == types.WatchActionSync || trigger.Action == types.WatchActionSyncRestart\n}\n\nfunc (s *composeService) watchEvents(ctx context.Context, project *types.Project, options api.WatchOptions, watcher watch.Notify, syncer sync.Syncer, rules []watchRule) error {\n\tctx, cancel := context.WithCancel(ctx)\n\tdefer cancel()\n\n\t// debounce and group filesystem events so that we capture IDE saving many files as one \"batch\" event\n\tbatchEvents := watch.BatchDebounceEvents(ctx, s.clock, watcher.Events())\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\toptions.LogTo.Log(api.WatchLogger, \"Watch disabled\")\n\t\t\t// Ensure watcher is closed to release resources\n\t\t\t_ = watcher.Close()\n\t\t\treturn nil\n\t\tcase err, open := <-watcher.Errors():\n\t\t\tif err != nil {\n\t\t\t\toptions.LogTo.Err(api.WatchLogger, \"Watch disabled with errors: \"+err.Error())\n\t\t\t}\n\t\t\tif open {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t_ = 
watcher.Close()\n\t\t\treturn err\n\t\tcase batch, ok := <-batchEvents:\n\t\t\tif !ok {\n\t\t\t\toptions.LogTo.Log(api.WatchLogger, \"Watch disabled\")\n\t\t\t\t_ = watcher.Close()\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tif len(batch) > 1000 {\n\t\t\t\tlogrus.Warnf(\"Very large batch of file changes detected: %d files. This may impact performance.\", len(batch))\n\t\t\t\toptions.LogTo.Log(api.WatchLogger, \"Large batch of file changes detected. If you just switched branches, this is expected.\")\n\t\t\t}\n\t\t\tstart := time.Now()\n\t\t\tlogrus.Debugf(\"batch start: count[%d]\", len(batch))\n\t\t\terr := s.handleWatchBatch(ctx, project, options, batch, rules, syncer)\n\t\t\tif err != nil {\n\t\t\t\tlogrus.Warnf(\"Error handling changed files: %v\", err)\n\t\t\t\t// If context was canceled, exit immediately\n\t\t\t\tif ctx.Err() != nil {\n\t\t\t\t\t_ = watcher.Close()\n\t\t\t\t\treturn ctx.Err()\n\t\t\t\t}\n\t\t\t}\n\t\t\tlogrus.Debugf(\"batch complete: duration[%s] count[%d]\", time.Since(start), len(batch))\n\t\t}\n\t}\n}\n\nfunc loadDevelopmentConfig(service types.ServiceConfig, project *types.Project) (*types.DevelopConfig, error) {\n\tvar config types.DevelopConfig\n\ty, ok := service.Extensions[\"x-develop\"]\n\tif !ok {\n\t\treturn nil, nil\n\t}\n\tlogrus.Warnf(\"x-develop is DEPRECATED, please use the official `develop` attribute\")\n\terr := mapstructure.Decode(y, &config)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tbaseDir, err := filepath.EvalSymlinks(project.WorkingDir)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"resolving symlink for %q: %w\", project.WorkingDir, err)\n\t}\n\n\tfor i, trigger := range config.Watch {\n\t\tif !filepath.IsAbs(trigger.Path) {\n\t\t\ttrigger.Path = filepath.Join(baseDir, trigger.Path)\n\t\t}\n\t\tif p, err := filepath.EvalSymlinks(trigger.Path); err == nil {\n\t\t\t// this might fail because the path doesn't exist, etc.\n\t\t\ttrigger.Path = p\n\t\t}\n\t\ttrigger.Path = filepath.Clean(trigger.Path)\n\t\tif trigger.Path 
== \"\" {\n\t\t\treturn nil, errors.New(\"watch rules MUST define a path\")\n\t\t}\n\n\t\tif trigger.Action == types.WatchActionRebuild && service.Build == nil {\n\t\t\treturn nil, fmt.Errorf(\"service %s doesn't have a build section, can't apply %s on watch\", service.Name, types.WatchActionRebuild)\n\t\t}\n\t\tif trigger.Action == types.WatchActionSyncExec && len(trigger.Exec.Command) == 0 {\n\t\t\treturn nil, fmt.Errorf(\"can't watch with action %q on service %s without a command\", types.WatchActionSyncExec, service.Name)\n\t\t}\n\n\t\tconfig.Watch[i] = trigger\n\t}\n\treturn &config, nil\n}\n\nfunc checkIfPathAlreadyBindMounted(watchPath string, volumes []types.ServiceVolumeConfig) bool {\n\tfor _, volume := range volumes {\n\t\tif volume.Bind != nil {\n\t\t\trelPath, err := filepath.Rel(volume.Source, watchPath)\n\t\t\tif err == nil && !strings.HasPrefix(relPath, \"..\") {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\ntype tarDockerClient struct {\n\ts *composeService\n}\n\nfunc (t tarDockerClient) ContainersForService(ctx context.Context, projectName string, serviceName string) ([]container.Summary, error) {\n\tcontainers, err := t.s.getContainers(ctx, projectName, oneOffExclude, true, serviceName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn containers, nil\n}\n\nfunc (t tarDockerClient) Exec(ctx context.Context, containerID string, cmd []string, in io.Reader) error {\n\texecCreateResp, err := t.s.apiClient().ExecCreate(ctx, containerID, client.ExecCreateOptions{\n\t\tCmd:          cmd,\n\t\tAttachStdout: false,\n\t\tAttachStderr: true,\n\t\tAttachStdin:  in != nil,\n\t\tTTY:          false,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tconn, err := t.s.apiClient().ExecAttach(ctx, execCreateResp.ID, client.ExecAttachOptions{\n\t\tTTY: false,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer conn.Close()\n\n\tvar eg errgroup.Group\n\tif in != nil {\n\t\teg.Go(func() error {\n\t\t\tdefer func() {\n\t\t\t\t_ = 
conn.CloseWrite()\n\t\t\t}()\n\t\t\t_, err := io.Copy(conn.Conn, in)\n\t\t\treturn err\n\t\t})\n\t}\n\teg.Go(func() error {\n\t\t_, err := io.Copy(t.s.stdout(), conn.Reader)\n\t\treturn err\n\t})\n\n\t_, err = t.s.apiClient().ExecStart(ctx, execCreateResp.ID, client.ExecStartOptions{\n\t\tTTY:    false,\n\t\tDetach: false,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// although the errgroup is not tied directly to the context, the operations\n\t// in it are reading/writing to the connection, which is tied to the context,\n\t// so they won't block indefinitely\n\tif err := eg.Wait(); err != nil {\n\t\treturn err\n\t}\n\n\texecResult, err := t.s.apiClient().ExecInspect(ctx, execCreateResp.ID, client.ExecInspectOptions{})\n\tif err != nil {\n\t\treturn err\n\t}\n\tif execResult.Running {\n\t\treturn errors.New(\"process still running\")\n\t}\n\tif execResult.ExitCode != 0 {\n\t\treturn fmt.Errorf(\"exit code %d\", execResult.ExitCode)\n\t}\n\treturn nil\n}\n\nfunc (t tarDockerClient) Untar(ctx context.Context, id string, archive io.ReadCloser) error {\n\t_, err := t.s.apiClient().CopyToContainer(ctx, id, client.CopyToContainerOptions{\n\t\tDestinationPath: \"/\",\n\t\tContent:         archive,\n\t\tCopyUIDGID:      true,\n\t})\n\treturn err\n}\n\n//nolint:gocyclo\nfunc (s *composeService) handleWatchBatch(ctx context.Context, project *types.Project, options api.WatchOptions, batch []watch.FileEvent, rules []watchRule, syncer sync.Syncer) error {\n\tvar (\n\t\trestart   = map[string]bool{}\n\t\tsyncfiles = map[string][]*sync.PathMapping{}\n\t\texec      = map[string][]int{}\n\t\trebuild   = map[string]bool{}\n\t)\n\tfor _, event := range batch {\n\t\tfor i, rule := range rules {\n\t\t\tmapping := rule.Matches(event)\n\t\t\tif mapping == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tswitch rule.Action {\n\t\t\tcase types.WatchActionRebuild:\n\t\t\t\trebuild[rule.service] = true\n\t\t\tcase types.WatchActionSync:\n\t\t\t\tsyncfiles[rule.service] = 
append(syncfiles[rule.service], mapping)\n\t\t\tcase types.WatchActionRestart:\n\t\t\t\trestart[rule.service] = true\n\t\t\tcase types.WatchActionSyncRestart:\n\t\t\t\tsyncfiles[rule.service] = append(syncfiles[rule.service], mapping)\n\t\t\t\trestart[rule.service] = true\n\t\t\tcase types.WatchActionSyncExec:\n\t\t\t\tsyncfiles[rule.service] = append(syncfiles[rule.service], mapping)\n\t\t\t\t// We want to run exec hooks only once after syncfiles if multiple file events match\n\t\t\t\t// as we can't compare ServiceHook to sort and compact a slice, collect rule indexes\n\t\t\t\texec[rule.service] = append(exec[rule.service], i)\n\t\t\t}\n\t\t}\n\t}\n\n\tlogrus.Debugf(\"watch actions: rebuild %d sync %d restart %d\", len(rebuild), len(syncfiles), len(restart))\n\n\tif len(rebuild) > 0 {\n\t\terr := s.rebuild(ctx, project, utils.MapKeys(rebuild), options)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tfor serviceName, pathMappings := range syncfiles {\n\t\twriteWatchSyncMessage(options.LogTo, serviceName, pathMappings)\n\t\terr := syncer.Sync(ctx, serviceName, pathMappings)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif len(restart) > 0 {\n\t\tservices := utils.MapKeys(restart)\n\t\terr := s.restart(ctx, project.Name, api.RestartOptions{\n\t\t\tServices: services,\n\t\t\tProject:  project,\n\t\t\tNoDeps:   false,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\toptions.LogTo.Log(\n\t\t\tapi.WatchLogger,\n\t\t\tfmt.Sprintf(\"service(s) %q restarted\", services))\n\t}\n\n\teg, ctx := errgroup.WithContext(ctx)\n\tfor service, rulesToExec := range exec {\n\t\tslices.Sort(rulesToExec)\n\t\tfor _, i := range slices.Compact(rulesToExec) {\n\t\t\terr := s.exec(ctx, project, service, rules[i].Exec, eg)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn eg.Wait()\n}\n\nfunc (s *composeService) exec(ctx context.Context, project *types.Project, serviceName string, x types.ServiceHook, eg *errgroup.Group) error {\n\tcontainers, 
err := s.getContainers(ctx, project.Name, oneOffExclude, false, serviceName)\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, c := range containers {\n\t\teg.Go(func() error {\n\t\t\texec := ccli.NewExecOptions()\n\t\t\texec.User = x.User\n\t\t\texec.Privileged = x.Privileged\n\t\t\texec.Command = x.Command\n\t\t\texec.Workdir = x.WorkingDir\n\t\t\texec.DetachKeys = s.configFile().DetachKeys\n\t\t\tfor _, v := range x.Environment.ToMapping().Values() {\n\t\t\t\terr := exec.Env.Set(v)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn ccli.RunExec(ctx, s.dockerCli, c.ID, exec)\n\t\t})\n\t}\n\treturn nil\n}\n\nfunc (s *composeService) rebuild(ctx context.Context, project *types.Project, services []string, options api.WatchOptions) error {\n\toptions.LogTo.Log(api.WatchLogger, fmt.Sprintf(\"Rebuilding service(s) %q after changes were detected...\", services))\n\t// restrict the build to ONLY this service, not any of its dependencies\n\toptions.Build.Services = services\n\toptions.Build.Progress = string(progressui.PlainMode)\n\toptions.Build.Out = cutils.GetWriter(func(line string) {\n\t\toptions.LogTo.Log(api.WatchLogger, line)\n\t})\n\n\tvar (\n\t\timageNameToIdMap map[string]string\n\t\terr              error\n\t)\n\terr = tracing.SpanWrapFunc(\"project/build\", tracing.ProjectOptions(ctx, project),\n\t\tfunc(ctx context.Context) error {\n\t\t\timageNameToIdMap, err = s.build(ctx, project, *options.Build, nil)\n\t\t\treturn err\n\t\t})(ctx)\n\tif err != nil {\n\t\toptions.LogTo.Log(api.WatchLogger, fmt.Sprintf(\"Build failed. 
Error: %v\", err))\n\t\treturn err\n\t}\n\n\tif options.Prune {\n\t\ts.pruneDanglingImagesOnRebuild(ctx, project.Name, imageNameToIdMap)\n\t}\n\n\toptions.LogTo.Log(api.WatchLogger, fmt.Sprintf(\"service(s) %q successfully built\", services))\n\n\terr = s.create(ctx, project, api.CreateOptions{\n\t\tServices: services,\n\t\tInherit:  true,\n\t\tRecreate: api.RecreateForce,\n\t})\n\tif err != nil {\n\t\toptions.LogTo.Log(api.WatchLogger, fmt.Sprintf(\"Failed to recreate services after update. Error: %v\", err))\n\t\treturn err\n\t}\n\n\tp, err := project.WithSelectedServices(services, types.IncludeDependents)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = s.start(ctx, project.Name, api.StartOptions{\n\t\tProject:  p,\n\t\tServices: services,\n\t\tAttachTo: services,\n\t}, nil)\n\tif err != nil {\n\t\toptions.LogTo.Log(api.WatchLogger, fmt.Sprintf(\"Application failed to start after update. Error: %v\", err))\n\t}\n\treturn nil\n}\n\n// writeWatchSyncMessage prints out a message about the sync for the changed paths.\nfunc writeWatchSyncMessage(log api.LogConsumer, serviceName string, pathMappings []*sync.PathMapping) {\n\tif logrus.IsLevelEnabled(logrus.DebugLevel) {\n\t\thostPathsToSync := make([]string, len(pathMappings))\n\t\tfor i := range pathMappings {\n\t\t\thostPathsToSync[i] = pathMappings[i].HostPath\n\t\t}\n\t\tlog.Log(\n\t\t\tapi.WatchLogger,\n\t\t\tfmt.Sprintf(\n\t\t\t\t\"Syncing service %q after changes were detected: %s\",\n\t\t\t\tserviceName,\n\t\t\t\tstrings.Join(hostPathsToSync, \", \"),\n\t\t\t),\n\t\t)\n\t} else {\n\t\tlog.Log(\n\t\t\tapi.WatchLogger,\n\t\t\tfmt.Sprintf(\"Syncing service %q after %d changes were detected\", serviceName, len(pathMappings)),\n\t\t)\n\t}\n}\n\nfunc (s *composeService) pruneDanglingImagesOnRebuild(ctx context.Context, projectName string, imageNameToIdMap map[string]string) {\n\timages, err := s.apiClient().ImageList(ctx, client.ImageListOptions{\n\t\tFilters: projectFilter(projectName).Add(\"dangling\", 
\"true\"),\n\t})\n\tif err != nil {\n\t\tlogrus.Debugf(\"Failed to list images: %v\", err)\n\t\treturn\n\t}\n\n\tfor _, img := range images.Items {\n\t\tif _, ok := imageNameToIdMap[img.ID]; !ok {\n\t\t\t_, err := s.apiClient().ImageRemove(ctx, img.ID, client.ImageRemoveOptions{})\n\t\t\tif err != nil {\n\t\t\t\tlogrus.Debugf(\"Failed to remove image %s: %v\", img.ID, err)\n\t\t\t}\n\t\t}\n\t}\n}\n\n// Walks develop.watch.path and checks which files should be copied inside the container\n// ignores develop.watch.ignore, Dockerfile, compose files, bind mounted paths and .git\nfunc (s *composeService) initialSync(ctx context.Context, project *types.Project, service types.ServiceConfig, trigger types.Trigger, syncer sync.Syncer) error {\n\tdockerIgnores, err := watch.LoadDockerIgnore(service.Build)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdotGitIgnore, err := watch.NewDockerPatternMatcher(\"/\", []string{\".git/\"})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\ttriggerIgnore, err := watch.NewDockerPatternMatcher(trigger.Path, trigger.Ignore)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// FIXME .dockerignore\n\tignoreInitialSync := watch.NewCompositeMatcher(\n\t\tdockerIgnores,\n\t\twatch.EphemeralPathMatcher(),\n\t\tdotGitIgnore,\n\t\ttriggerIgnore)\n\n\tpathsToCopy, err := s.initialSyncFiles(ctx, project, service, trigger, ignoreInitialSync)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn syncer.Sync(ctx, service.Name, pathsToCopy)\n}\n\n// Syncs files from develop.watch.path if they have been modified after the image has been created\n//\n//nolint:gocyclo\nfunc (s *composeService) initialSyncFiles(ctx context.Context, project *types.Project, service types.ServiceConfig, trigger types.Trigger, ignore watch.PathMatcher) ([]*sync.PathMapping, error) {\n\tfi, err := os.Stat(trigger.Path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\ttimeImageCreated, err := s.imageCreatedTime(ctx, project, service.Name)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar 
pathsToCopy []*sync.PathMapping\n\tswitch mode := fi.Mode(); {\n\tcase mode.IsDir():\n\t\t// process directory\n\t\terr = filepath.WalkDir(trigger.Path, func(path string, d fs.DirEntry, err error) error {\n\t\t\tif err != nil {\n\t\t\t\t// handle possible path err, just in case...\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif trigger.Path == path {\n\t\t\t\t// walk starts at the root directory\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tif shouldIgnore(filepath.Base(path), ignore) || checkIfPathAlreadyBindMounted(path, service.Volumes) {\n\t\t\t\t// By definition sync ignores bind mounted paths\n\t\t\t\tif d.IsDir() {\n\t\t\t\t\t// skip folder\n\t\t\t\t\treturn fs.SkipDir\n\t\t\t\t}\n\t\t\t\treturn nil // skip file\n\t\t\t}\n\t\t\tinfo, err := d.Info()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif !d.IsDir() {\n\t\t\t\tif info.ModTime().Before(timeImageCreated) {\n\t\t\t\t\t// skip file if it was modified before image creation\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\trel, err := filepath.Rel(trigger.Path, path)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\t// only copy files (and not full directories)\n\t\t\t\tpathsToCopy = append(pathsToCopy, &sync.PathMapping{\n\t\t\t\t\tHostPath:      path,\n\t\t\t\t\tContainerPath: filepath.Join(trigger.Target, rel),\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn nil\n\t\t})\n\tcase mode.IsRegular():\n\t\t// process file\n\t\tif fi.ModTime().After(timeImageCreated) && !shouldIgnore(filepath.Base(trigger.Path), ignore) && !checkIfPathAlreadyBindMounted(trigger.Path, service.Volumes) {\n\t\t\tpathsToCopy = append(pathsToCopy, &sync.PathMapping{\n\t\t\t\tHostPath:      trigger.Path,\n\t\t\t\tContainerPath: trigger.Target,\n\t\t\t})\n\t\t}\n\t}\n\treturn pathsToCopy, err\n}\n\nfunc shouldIgnore(name string, ignore watch.PathMatcher) bool {\n\tshouldIgnore, _ := ignore.Matches(name)\n\t// ignore files that match any ignore pattern\n\treturn shouldIgnore\n}\n\n// gets the image creation time for a service\nfunc (s 
*composeService) imageCreatedTime(ctx context.Context, project *types.Project, serviceName string) (time.Time, error) {\n\tres, err := s.apiClient().ContainerList(ctx, client.ContainerListOptions{\n\t\tAll:     true,\n\t\tFilters: projectFilter(project.Name).Add(\"label\", serviceFilter(serviceName)),\n\t})\n\tif err != nil {\n\t\treturn time.Now(), err\n\t}\n\tif len(res.Items) == 0 {\n\t\treturn time.Now(), fmt.Errorf(\"could not get created time for service's image\")\n\t}\n\n\timg, err := s.apiClient().ImageInspect(ctx, res.Items[0].ImageID)\n\tif err != nil {\n\t\treturn time.Now(), err\n\t}\n\t// Need to get the oldest one?\n\ttimeCreated, err := time.Parse(time.RFC3339Nano, img.Created)\n\tif err != nil {\n\t\treturn time.Now(), err\n\t}\n\treturn timeCreated, nil\n}\n"
  },
  {
    "path": "pkg/compose/watch_test.go",
    "content": "/*\n\n   Copyright 2020 Docker Compose CLI authors\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage compose\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/streams\"\n\t\"github.com/jonboulle/clockwork\"\n\t\"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/client\"\n\t\"github.com/stretchr/testify/require\"\n\t\"go.uber.org/mock/gomock\"\n\t\"gotest.tools/v3/assert\"\n\n\t\"github.com/docker/compose/v5/internal/sync\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n\t\"github.com/docker/compose/v5/pkg/mocks\"\n\t\"github.com/docker/compose/v5/pkg/watch\"\n)\n\ntype testWatcher struct {\n\tevents chan watch.FileEvent\n\terrors chan error\n}\n\nfunc (t testWatcher) Start() error {\n\treturn nil\n}\n\nfunc (t testWatcher) Close() error {\n\treturn nil\n}\n\nfunc (t testWatcher) Events() chan watch.FileEvent {\n\treturn t.events\n}\n\nfunc (t testWatcher) Errors() chan error {\n\treturn t.errors\n}\n\ntype stdLogger struct{}\n\nfunc (s stdLogger) Log(containerName, message string) {\n\tfmt.Printf(\"%s: %s\\n\", containerName, message)\n}\n\nfunc (s stdLogger) Err(containerName, message string) {\n\tfmt.Fprintf(os.Stderr, \"%s: %s\\n\", containerName, message)\n}\n\nfunc (s stdLogger) Status(containerName, msg string) {\n\tfmt.Printf(\"%s: %s\\n\", containerName, 
msg)\n}\n\nfunc TestWatch_Sync(t *testing.T) {\n\tmockCtrl := gomock.NewController(t)\n\tcli := mocks.NewMockCli(mockCtrl)\n\tcli.EXPECT().Err().Return(streams.NewOut(os.Stderr)).AnyTimes()\n\tapiClient := mocks.NewMockAPIClient(mockCtrl)\n\tapiClient.EXPECT().ContainerList(gomock.Any(), gomock.Any()).Return(client.ContainerListResult{\n\t\tItems: []container.Summary{\n\t\t\ttestContainer(\"test\", \"123\", false),\n\t\t},\n\t}, nil).AnyTimes()\n\t// we expect the image to be pruned\n\tapiClient.EXPECT().ImageList(gomock.Any(), client.ImageListOptions{\n\t\tFilters: make(client.Filters).\n\t\t\tAdd(\"dangling\", \"true\").\n\t\t\tAdd(\"label\", api.ProjectLabel+\"=myProjectName\"),\n\t}).Return(client.ImageListResult{\n\t\tItems: []image.Summary{\n\t\t\t{ID: \"123\"},\n\t\t\t{ID: \"456\"},\n\t\t},\n\t}, nil).Times(1)\n\tapiClient.EXPECT().ImageRemove(gomock.Any(), \"123\", client.ImageRemoveOptions{}).Times(1)\n\tapiClient.EXPECT().ImageRemove(gomock.Any(), \"456\", client.ImageRemoveOptions{}).Times(1)\n\t//\n\tcli.EXPECT().Client().Return(apiClient).AnyTimes()\n\n\tctx, cancelFunc := context.WithCancel(t.Context())\n\tt.Cleanup(cancelFunc)\n\n\tproj := types.Project{\n\t\tName: \"myProjectName\",\n\t\tServices: types.Services{\n\t\t\t\"test\": {\n\t\t\t\tName: \"test\",\n\t\t\t},\n\t\t},\n\t}\n\n\twatcher := testWatcher{\n\t\tevents: make(chan watch.FileEvent),\n\t\terrors: make(chan error),\n\t}\n\n\tsyncer := newFakeSyncer()\n\tclock := clockwork.NewFakeClock()\n\tgo func() {\n\t\tservice := composeService{\n\t\t\tdockerCli: cli,\n\t\t\tclock:     clock,\n\t\t}\n\t\trules, err := getWatchRules(&types.DevelopConfig{\n\t\t\tWatch: []types.Trigger{\n\t\t\t\t{\n\t\t\t\t\tPath:   \"/sync\",\n\t\t\t\t\tAction: \"sync\",\n\t\t\t\t\tTarget: \"/work\",\n\t\t\t\t\tIgnore: []string{\"ignore\"},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tPath:   \"/rebuild\",\n\t\t\t\t\tAction: \"rebuild\",\n\t\t\t\t},\n\t\t\t},\n\t\t}, types.ServiceConfig{Name: \"test\"})\n\t\tassert.NilError(t, 
err)\n\n\t\terr = service.watchEvents(ctx, &proj, api.WatchOptions{\n\t\t\tBuild: &api.BuildOptions{},\n\t\t\tLogTo: stdLogger{},\n\t\t\tPrune: true,\n\t\t}, watcher, syncer, rules)\n\t\tassert.NilError(t, err)\n\t}()\n\n\twatcher.Events() <- watch.NewFileEvent(\"/sync/changed\")\n\twatcher.Events() <- watch.NewFileEvent(\"/sync/changed/sub\")\n\terr := clock.BlockUntilContext(ctx, 3)\n\tassert.NilError(t, err)\n\tclock.Advance(watch.QuietPeriod)\n\tselect {\n\tcase actual := <-syncer.synced:\n\t\trequire.ElementsMatch(t, []*sync.PathMapping{\n\t\t\t{HostPath: \"/sync/changed\", ContainerPath: \"/work/changed\"},\n\t\t\t{HostPath: \"/sync/changed/sub\", ContainerPath: \"/work/changed/sub\"},\n\t\t}, actual)\n\tcase <-time.After(100 * time.Millisecond):\n\t\tt.Error(\"timeout\")\n\t}\n\n\twatcher.Events() <- watch.NewFileEvent(\"/rebuild\")\n\twatcher.Events() <- watch.NewFileEvent(\"/sync/changed\")\n\terr = clock.BlockUntilContext(ctx, 4)\n\tassert.NilError(t, err)\n\tclock.Advance(watch.QuietPeriod)\n\tselect {\n\tcase batch := <-syncer.synced:\n\t\tt.Fatalf(\"received unexpected events: %v\", batch)\n\tcase <-time.After(100 * time.Millisecond):\n\t\t// expected\n\t}\n\t// TODO: there's not a great way to assert that the rebuild attempt happened\n}\n\ntype fakeSyncer struct {\n\tsynced chan []*sync.PathMapping\n}\n\nfunc newFakeSyncer() *fakeSyncer {\n\treturn &fakeSyncer{\n\t\tsynced: make(chan []*sync.PathMapping),\n\t}\n}\n\nfunc (f *fakeSyncer) Sync(ctx context.Context, service string, paths []*sync.PathMapping) error {\n\tf.synced <- paths\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/dryrun/dryrunclient.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage dryrun\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/rand\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/docker/buildx/builder\"\n\t\"github.com/docker/buildx/util/imagetools\"\n\t\"github.com/docker/cli/cli/command\"\n\tcontainerType \"github.com/moby/moby/api/types/container\"\n\t\"github.com/moby/moby/api/types/image\"\n\t\"github.com/moby/moby/api/types/jsonstream\"\n\t\"github.com/moby/moby/api/types/volume\"\n\t\"github.com/moby/moby/client\"\n)\n\nvar _ client.APIClient = &DryRunClient{}\n\n// DryRunClient implements APIClient by delegating to implementation functions. 
This allows lazy init and per-method overrides\ntype DryRunClient struct {\n\tapiClient  client.APIClient\n\tcontainers []containerType.Summary\n\texecs      sync.Map\n\tresolver   *imagetools.Resolver\n}\n\ntype execDetails struct {\n\tcontainer string\n\tcommand   []string\n}\n\ntype fakeStreamResult struct {\n\tio.ReadCloser\n\tclient.ImagePushResponse // same interface as [client.ImagePullResponse]\n}\n\nfunc (e fakeStreamResult) Read(p []byte) (int, error) { return e.ReadCloser.Read(p) }\nfunc (e fakeStreamResult) Close() error               { return e.ReadCloser.Close() }\n\n// NewDryRunClient produces a DryRunClient\nfunc NewDryRunClient(apiClient client.APIClient, cli command.Cli) (*DryRunClient, error) {\n\tb, err := builder.New(cli, builder.WithSkippedValidation())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tconfigFile, err := b.ImageOpt()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &DryRunClient{\n\t\tapiClient:  apiClient,\n\t\tcontainers: []containerType.Summary{},\n\t\texecs:      sync.Map{},\n\t\tresolver:   imagetools.New(configFile),\n\t}, nil\n}\n\nfunc getCallingFunction() string {\n\tpc, _, _, _ := runtime.Caller(2)\n\tfullName := runtime.FuncForPC(pc).Name()\n\treturn fullName[strings.LastIndex(fullName, \".\")+1:]\n}\n\n// All methods and functions which need to be overridden for dry run.\n\nfunc (d *DryRunClient) ContainerAttach(ctx context.Context, container string, options client.ContainerAttachOptions) (client.ContainerAttachResult, error) {\n\treturn client.ContainerAttachResult{}, errors.New(\"interactive run is not supported in dry-run mode\")\n}\n\nfunc (d *DryRunClient) ContainerCreate(ctx context.Context, options client.ContainerCreateOptions) (client.ContainerCreateResult, error) {\n\td.containers = append(d.containers, containerType.Summary{\n\t\tID:     options.Name,\n\t\tNames:  []string{options.Name},\n\t\tLabels: options.Config.Labels,\n\t\tHostConfig: struct {\n\t\t\tNetworkMode string            
`json:\",omitempty\"`\n\t\t\tAnnotations map[string]string `json:\",omitempty\"`\n\t\t}{},\n\t})\n\treturn client.ContainerCreateResult{ID: options.Name}, nil\n}\n\nfunc (d *DryRunClient) ContainerInspect(ctx context.Context, container string, options client.ContainerInspectOptions) (client.ContainerInspectResult, error) {\n\tcontainerJSON, err := d.apiClient.ContainerInspect(ctx, container, options)\n\tif err != nil {\n\t\tid := \"dryRunId\"\n\t\tfor _, c := range d.containers {\n\t\t\tif c.ID == container {\n\t\t\t\tid = container\n\t\t\t}\n\t\t}\n\t\treturn client.ContainerInspectResult{\n\t\t\tContainer: containerType.InspectResponse{\n\t\t\t\tID:   id,\n\t\t\t\tName: container,\n\t\t\t\tState: &containerType.State{\n\t\t\t\t\tStatus: containerType.StateRunning, // needed for --wait option\n\t\t\t\t\tHealth: &containerType.Health{\n\t\t\t\t\t\tStatus: containerType.Healthy, // needed for healthcheck control\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tMounts:          nil,\n\t\t\t\tConfig:          &containerType.Config{},\n\t\t\t\tNetworkSettings: &containerType.NetworkSettings{},\n\t\t\t},\n\t\t}, nil\n\t}\n\treturn containerJSON, err\n}\n\nfunc (d *DryRunClient) ContainerKill(ctx context.Context, container string, options client.ContainerKillOptions) (client.ContainerKillResult, error) {\n\treturn client.ContainerKillResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerList(ctx context.Context, options client.ContainerListOptions) (client.ContainerListResult, error) {\n\tcaller := getCallingFunction()\n\tswitch caller {\n\tcase \"start\":\n\t\treturn client.ContainerListResult{\n\t\t\tItems: d.containers,\n\t\t}, nil\n\tcase \"getContainers\":\n\t\tif len(d.containers) == 0 {\n\t\t\tres, err := d.apiClient.ContainerList(ctx, options)\n\t\t\tif err == nil {\n\t\t\t\td.containers = res.Items\n\t\t\t}\n\t\t\treturn client.ContainerListResult{\n\t\t\t\tItems: d.containers,\n\t\t\t}, err\n\t\t}\n\t}\n\treturn d.apiClient.ContainerList(ctx, options)\n}\n\nfunc (d 
*DryRunClient) ContainerPause(ctx context.Context, container string, options client.ContainerPauseOptions) (client.ContainerPauseResult, error) {\n\treturn client.ContainerPauseResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerRemove(ctx context.Context, container string, options client.ContainerRemoveOptions) (client.ContainerRemoveResult, error) {\n\treturn client.ContainerRemoveResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerRename(ctx context.Context, container string, options client.ContainerRenameOptions) (client.ContainerRenameResult, error) {\n\treturn client.ContainerRenameResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerRestart(ctx context.Context, container string, options client.ContainerRestartOptions) (client.ContainerRestartResult, error) {\n\treturn client.ContainerRestartResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerStart(ctx context.Context, container string, options client.ContainerStartOptions) (client.ContainerStartResult, error) {\n\treturn client.ContainerStartResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerStop(ctx context.Context, container string, options client.ContainerStopOptions) (client.ContainerStopResult, error) {\n\treturn client.ContainerStopResult{}, nil\n}\n\nfunc (d *DryRunClient) ContainerUnpause(ctx context.Context, container string, options client.ContainerUnpauseOptions) (client.ContainerUnpauseResult, error) {\n\treturn client.ContainerUnpauseResult{}, nil\n}\n\nfunc (d *DryRunClient) CopyFromContainer(ctx context.Context, container string, options client.CopyFromContainerOptions) (client.CopyFromContainerResult, error) {\n\tif _, err := d.ContainerStatPath(ctx, container, client.ContainerStatPathOptions{Path: options.SourcePath}); err != nil {\n\t\treturn client.CopyFromContainerResult{}, fmt.Errorf(\"could not find the file %s in container %s\", options.SourcePath, container)\n\t}\n\treturn client.CopyFromContainerResult{}, nil\n}\n\nfunc (d *DryRunClient) CopyToContainer(ctx context.Context, container 
string, options client.CopyToContainerOptions) (client.CopyToContainerResult, error) {\n\treturn client.CopyToContainerResult{}, nil\n}\n\nfunc (d *DryRunClient) ImageBuild(ctx context.Context, reader io.Reader, options client.ImageBuildOptions) (client.ImageBuildResult, error) {\n\treturn client.ImageBuildResult{\n\t\tBody: io.NopCloser(bytes.NewReader(nil)),\n\t}, nil\n}\n\nfunc (d *DryRunClient) ImageInspect(ctx context.Context, imageName string, options ...client.ImageInspectOption) (client.ImageInspectResult, error) {\n\tcaller := getCallingFunction()\n\tswitch caller {\n\tcase \"pullServiceImage\", \"buildContainerVolumes\":\n\t\treturn client.ImageInspectResult{\n\t\t\tInspectResponse: image.InspectResponse{ID: \"dryRunId\"},\n\t\t}, nil\n\tdefault:\n\t\treturn d.apiClient.ImageInspect(ctx, imageName, options...)\n\t}\n}\n\nfunc (d *DryRunClient) ImagePull(ctx context.Context, ref string, options client.ImagePullOptions) (client.ImagePullResponse, error) {\n\tif _, _, err := d.resolver.Resolve(ctx, ref); err != nil {\n\t\treturn nil, err\n\t}\n\treturn fakeStreamResult{ReadCloser: http.NoBody}, nil\n}\n\nfunc (d *DryRunClient) ImagePush(ctx context.Context, ref string, options client.ImagePushOptions) (client.ImagePushResponse, error) {\n\tif _, _, err := d.resolver.Resolve(ctx, ref); err != nil {\n\t\treturn nil, err\n\t}\n\tjsonMessage, err := json.Marshal(&jsonstream.Message{\n\t\tStatus: \"Pushed\",\n\t\tProgress: &jsonstream.Progress{\n\t\t\tCurrent:    100,\n\t\t\tTotal:      100,\n\t\t\tStart:      0,\n\t\t\tHideCounts: false,\n\t\t\tUnits:      \"Mb\",\n\t\t},\n\t\tID: ref,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn fakeStreamResult{ReadCloser: io.NopCloser(bytes.NewReader(jsonMessage))}, nil\n}\n\nfunc (d *DryRunClient) ImageRemove(ctx context.Context, imageName string, options client.ImageRemoveOptions) (client.ImageRemoveResult, error) {\n\treturn client.ImageRemoveResult{}, nil\n}\n\nfunc (d *DryRunClient) NetworkConnect(ctx 
context.Context, networkName string, options client.NetworkConnectOptions) (client.NetworkConnectResult, error) {\n\treturn client.NetworkConnectResult{}, nil\n}\n\nfunc (d *DryRunClient) NetworkCreate(ctx context.Context, name string, options client.NetworkCreateOptions) (client.NetworkCreateResult, error) {\n\treturn client.NetworkCreateResult{\n\t\tID: name,\n\t}, nil\n}\n\nfunc (d *DryRunClient) NetworkDisconnect(ctx context.Context, networkName string, options client.NetworkDisconnectOptions) (client.NetworkDisconnectResult, error) {\n\treturn client.NetworkDisconnectResult{}, nil\n}\n\nfunc (d *DryRunClient) NetworkRemove(ctx context.Context, networkName string, options client.NetworkRemoveOptions) (client.NetworkRemoveResult, error) {\n\treturn client.NetworkRemoveResult{}, nil\n}\n\nfunc (d *DryRunClient) VolumeCreate(ctx context.Context, options client.VolumeCreateOptions) (client.VolumeCreateResult, error) {\n\treturn client.VolumeCreateResult{\n\t\tVolume: volume.Volume{\n\t\t\tClusterVolume: nil,\n\t\t\tDriver:        options.Driver,\n\t\t\tLabels:        options.Labels,\n\t\t\tName:          options.Name,\n\t\t\tOptions:       options.DriverOpts,\n\t\t},\n\t}, nil\n}\n\nfunc (d *DryRunClient) VolumeRemove(ctx context.Context, volumeID string, options client.VolumeRemoveOptions) (client.VolumeRemoveResult, error) {\n\treturn client.VolumeRemoveResult{}, nil\n}\n\nfunc (d *DryRunClient) ExecCreate(ctx context.Context, container string, config client.ExecCreateOptions) (client.ExecCreateResult, error) {\n\tb := make([]byte, 32)\n\t_, _ = rand.Read(b)\n\tid := fmt.Sprintf(\"%x\", b)\n\td.execs.Store(id, execDetails{\n\t\tcontainer: container,\n\t\tcommand:   config.Cmd,\n\t})\n\treturn client.ExecCreateResult{\n\t\tID: id,\n\t}, nil\n}\n\nfunc (d *DryRunClient) ExecStart(ctx context.Context, execID string, config client.ExecStartOptions) (client.ExecStartResult, error) {\n\t_, ok := d.execs.LoadAndDelete(execID)\n\tif !ok {\n\t\treturn 
client.ExecStartResult{}, fmt.Errorf(\"invalid exec ID %q\", execID)\n\t}\n\treturn client.ExecStartResult{}, nil\n}\n\n// Functions delegated to original APIClient (not used by Compose or not modifying the Compose stack)\n\nfunc (d *DryRunClient) ConfigList(ctx context.Context, options client.ConfigListOptions) (client.ConfigListResult, error) {\n\treturn d.apiClient.ConfigList(ctx, options)\n}\n\nfunc (d *DryRunClient) ConfigInspect(ctx context.Context, name string, options client.ConfigInspectOptions) (client.ConfigInspectResult, error) {\n\treturn d.apiClient.ConfigInspect(ctx, name, options)\n}\n\nfunc (d *DryRunClient) ConfigCreate(ctx context.Context, options client.ConfigCreateOptions) (client.ConfigCreateResult, error) {\n\treturn d.apiClient.ConfigCreate(ctx, options)\n}\n\nfunc (d *DryRunClient) ConfigRemove(ctx context.Context, id string, options client.ConfigRemoveOptions) (client.ConfigRemoveResult, error) {\n\treturn d.apiClient.ConfigRemove(ctx, id, options)\n}\n\nfunc (d *DryRunClient) ConfigUpdate(ctx context.Context, id string, options client.ConfigUpdateOptions) (client.ConfigUpdateResult, error) {\n\treturn d.apiClient.ConfigUpdate(ctx, id, options)\n}\n\nfunc (d *DryRunClient) ContainerCommit(ctx context.Context, container string, options client.ContainerCommitOptions) (client.ContainerCommitResult, error) {\n\treturn d.apiClient.ContainerCommit(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerDiff(ctx context.Context, container string, options client.ContainerDiffOptions) (client.ContainerDiffResult, error) {\n\treturn d.apiClient.ContainerDiff(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ExecAttach(ctx context.Context, execID string, config client.ExecAttachOptions) (client.ExecAttachResult, error) {\n\treturn client.ExecAttachResult{}, errors.New(\"interactive exec is not supported in dry-run mode\")\n}\n\nfunc (d *DryRunClient) ExecInspect(ctx context.Context, execID string, options client.ExecInspectOptions) 
(client.ExecInspectResult, error) {\n\treturn d.apiClient.ExecInspect(ctx, execID, options)\n}\n\nfunc (d *DryRunClient) ExecResize(ctx context.Context, execID string, options client.ExecResizeOptions) (client.ExecResizeResult, error) {\n\treturn d.apiClient.ExecResize(ctx, execID, options)\n}\n\nfunc (d *DryRunClient) ContainerExport(ctx context.Context, container string, options client.ContainerExportOptions) (client.ContainerExportResult, error) {\n\treturn d.apiClient.ContainerExport(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerLogs(ctx context.Context, container string, options client.ContainerLogsOptions) (client.ContainerLogsResult, error) {\n\treturn d.apiClient.ContainerLogs(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerResize(ctx context.Context, container string, options client.ContainerResizeOptions) (client.ContainerResizeResult, error) {\n\treturn d.apiClient.ContainerResize(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerStatPath(ctx context.Context, container string, options client.ContainerStatPathOptions) (client.ContainerStatPathResult, error) {\n\treturn d.apiClient.ContainerStatPath(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerStats(ctx context.Context, container string, options client.ContainerStatsOptions) (client.ContainerStatsResult, error) {\n\treturn d.apiClient.ContainerStats(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerTop(ctx context.Context, container string, options client.ContainerTopOptions) (client.ContainerTopResult, error) {\n\treturn d.apiClient.ContainerTop(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerUpdate(ctx context.Context, container string, options client.ContainerUpdateOptions) (client.ContainerUpdateResult, error) {\n\treturn d.apiClient.ContainerUpdate(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerWait(ctx context.Context, container string, options client.ContainerWaitOptions) 
client.ContainerWaitResult {\n\treturn d.apiClient.ContainerWait(ctx, container, options)\n}\n\nfunc (d *DryRunClient) ContainerPrune(ctx context.Context, options client.ContainerPruneOptions) (client.ContainerPruneResult, error) {\n\treturn d.apiClient.ContainerPrune(ctx, options)\n}\n\nfunc (d *DryRunClient) DistributionInspect(ctx context.Context, imageName string, options client.DistributionInspectOptions) (client.DistributionInspectResult, error) {\n\treturn d.apiClient.DistributionInspect(ctx, imageName, options)\n}\n\nfunc (d *DryRunClient) BuildCachePrune(ctx context.Context, opts client.BuildCachePruneOptions) (client.BuildCachePruneResult, error) {\n\treturn d.apiClient.BuildCachePrune(ctx, opts)\n}\n\nfunc (d *DryRunClient) BuildCancel(ctx context.Context, id string, opts client.BuildCancelOptions) (client.BuildCancelResult, error) {\n\treturn d.apiClient.BuildCancel(ctx, id, opts)\n}\n\nfunc (d *DryRunClient) ImageHistory(ctx context.Context, imageName string, options ...client.ImageHistoryOption) (client.ImageHistoryResult, error) {\n\treturn d.apiClient.ImageHistory(ctx, imageName, options...)\n}\n\nfunc (d *DryRunClient) ImageImport(ctx context.Context, source client.ImageImportSource, ref string, options client.ImageImportOptions) (client.ImageImportResult, error) {\n\treturn d.apiClient.ImageImport(ctx, source, ref, options)\n}\n\nfunc (d *DryRunClient) ImageList(ctx context.Context, options client.ImageListOptions) (client.ImageListResult, error) {\n\treturn d.apiClient.ImageList(ctx, options)\n}\n\nfunc (d *DryRunClient) ImageLoad(ctx context.Context, input io.Reader, options ...client.ImageLoadOption) (client.ImageLoadResult, error) {\n\treturn d.apiClient.ImageLoad(ctx, input, options...)\n}\n\nfunc (d *DryRunClient) ImageSearch(ctx context.Context, term string, options client.ImageSearchOptions) (client.ImageSearchResult, error) {\n\treturn d.apiClient.ImageSearch(ctx, term, options)\n}\n\nfunc (d *DryRunClient) ImageSave(ctx context.Context, 
images []string, options ...client.ImageSaveOption) (client.ImageSaveResult, error) {\n\treturn d.apiClient.ImageSave(ctx, images, options...)\n}\n\nfunc (d *DryRunClient) ImageTag(ctx context.Context, options client.ImageTagOptions) (client.ImageTagResult, error) {\n\treturn d.apiClient.ImageTag(ctx, options)\n}\n\nfunc (d *DryRunClient) ImagePrune(ctx context.Context, options client.ImagePruneOptions) (client.ImagePruneResult, error) {\n\treturn d.apiClient.ImagePrune(ctx, options)\n}\n\nfunc (d *DryRunClient) NodeInspect(ctx context.Context, nodeID string, options client.NodeInspectOptions) (client.NodeInspectResult, error) {\n\treturn d.apiClient.NodeInspect(ctx, nodeID, options)\n}\n\nfunc (d *DryRunClient) NodeList(ctx context.Context, options client.NodeListOptions) (client.NodeListResult, error) {\n\treturn d.apiClient.NodeList(ctx, options)\n}\n\nfunc (d *DryRunClient) NodeRemove(ctx context.Context, nodeID string, options client.NodeRemoveOptions) (client.NodeRemoveResult, error) {\n\treturn d.apiClient.NodeRemove(ctx, nodeID, options)\n}\n\nfunc (d *DryRunClient) NodeUpdate(ctx context.Context, nodeID string, options client.NodeUpdateOptions) (client.NodeUpdateResult, error) {\n\treturn d.apiClient.NodeUpdate(ctx, nodeID, options)\n}\n\nfunc (d *DryRunClient) NetworkInspect(ctx context.Context, networkName string, options client.NetworkInspectOptions) (client.NetworkInspectResult, error) {\n\treturn d.apiClient.NetworkInspect(ctx, networkName, options)\n}\n\nfunc (d *DryRunClient) NetworkList(ctx context.Context, options client.NetworkListOptions) (client.NetworkListResult, error) {\n\treturn d.apiClient.NetworkList(ctx, options)\n}\n\nfunc (d *DryRunClient) NetworkPrune(ctx context.Context, options client.NetworkPruneOptions) (client.NetworkPruneResult, error) {\n\treturn d.apiClient.NetworkPrune(ctx, options)\n}\n\nfunc (d *DryRunClient) PluginList(ctx context.Context, options client.PluginListOptions) (client.PluginListResult, error) {\n\treturn 
d.apiClient.PluginList(ctx, options)\n}\n\nfunc (d *DryRunClient) PluginRemove(ctx context.Context, name string, options client.PluginRemoveOptions) (client.PluginRemoveResult, error) {\n\treturn d.apiClient.PluginRemove(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginEnable(ctx context.Context, name string, options client.PluginEnableOptions) (client.PluginEnableResult, error) {\n\treturn d.apiClient.PluginEnable(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginDisable(ctx context.Context, name string, options client.PluginDisableOptions) (client.PluginDisableResult, error) {\n\treturn d.apiClient.PluginDisable(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginInstall(ctx context.Context, name string, options client.PluginInstallOptions) (client.PluginInstallResult, error) {\n\treturn d.apiClient.PluginInstall(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginUpgrade(ctx context.Context, name string, options client.PluginUpgradeOptions) (client.PluginUpgradeResult, error) {\n\treturn d.apiClient.PluginUpgrade(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginPush(ctx context.Context, name string, options client.PluginPushOptions) (client.PluginPushResult, error) {\n\treturn d.apiClient.PluginPush(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginSet(ctx context.Context, name string, options client.PluginSetOptions) (client.PluginSetResult, error) {\n\treturn d.apiClient.PluginSet(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginInspect(ctx context.Context, name string, options client.PluginInspectOptions) (client.PluginInspectResult, error) {\n\treturn d.apiClient.PluginInspect(ctx, name, options)\n}\n\nfunc (d *DryRunClient) PluginCreate(ctx context.Context, createContext io.Reader, options client.PluginCreateOptions) (client.PluginCreateResult, error) {\n\treturn d.apiClient.PluginCreate(ctx, createContext, options)\n}\n\nfunc (d *DryRunClient) ServiceCreate(ctx context.Context, options client.ServiceCreateOptions) 
(client.ServiceCreateResult, error) {\n\treturn d.apiClient.ServiceCreate(ctx, options)\n}\n\nfunc (d *DryRunClient) ServiceInspect(ctx context.Context, serviceID string, options client.ServiceInspectOptions) (client.ServiceInspectResult, error) {\n\treturn d.apiClient.ServiceInspect(ctx, serviceID, options)\n}\n\nfunc (d *DryRunClient) ServiceList(ctx context.Context, options client.ServiceListOptions) (client.ServiceListResult, error) {\n\treturn d.apiClient.ServiceList(ctx, options)\n}\n\nfunc (d *DryRunClient) ServiceRemove(ctx context.Context, serviceID string, options client.ServiceRemoveOptions) (client.ServiceRemoveResult, error) {\n\treturn d.apiClient.ServiceRemove(ctx, serviceID, options)\n}\n\nfunc (d *DryRunClient) ServiceUpdate(ctx context.Context, serviceID string, options client.ServiceUpdateOptions) (client.ServiceUpdateResult, error) {\n\treturn d.apiClient.ServiceUpdate(ctx, serviceID, options)\n}\n\nfunc (d *DryRunClient) ServiceLogs(ctx context.Context, serviceID string, options client.ServiceLogsOptions) (client.ServiceLogsResult, error) {\n\treturn d.apiClient.ServiceLogs(ctx, serviceID, options)\n}\n\nfunc (d *DryRunClient) TaskLogs(ctx context.Context, taskID string, options client.TaskLogsOptions) (client.TaskLogsResult, error) {\n\treturn d.apiClient.TaskLogs(ctx, taskID, options)\n}\n\nfunc (d *DryRunClient) TaskInspect(ctx context.Context, taskID string, options client.TaskInspectOptions) (client.TaskInspectResult, error) {\n\treturn d.apiClient.TaskInspect(ctx, taskID, options)\n}\n\nfunc (d *DryRunClient) TaskList(ctx context.Context, options client.TaskListOptions) (client.TaskListResult, error) {\n\treturn d.apiClient.TaskList(ctx, options)\n}\n\nfunc (d *DryRunClient) SwarmInit(ctx context.Context, options client.SwarmInitOptions) (client.SwarmInitResult, error) {\n\treturn d.apiClient.SwarmInit(ctx, options)\n}\n\nfunc (d *DryRunClient) SwarmJoin(ctx context.Context, options client.SwarmJoinOptions) (client.SwarmJoinResult, error) 
{\n\treturn d.apiClient.SwarmJoin(ctx, options)\n}\n\nfunc (d *DryRunClient) SwarmGetUnlockKey(ctx context.Context) (client.SwarmGetUnlockKeyResult, error) {\n\treturn d.apiClient.SwarmGetUnlockKey(ctx)\n}\n\nfunc (d *DryRunClient) SwarmUnlock(ctx context.Context, options client.SwarmUnlockOptions) (client.SwarmUnlockResult, error) {\n\treturn d.apiClient.SwarmUnlock(ctx, options)\n}\n\nfunc (d *DryRunClient) SwarmLeave(ctx context.Context, options client.SwarmLeaveOptions) (client.SwarmLeaveResult, error) {\n\treturn d.apiClient.SwarmLeave(ctx, options)\n}\n\nfunc (d *DryRunClient) SwarmInspect(ctx context.Context, options client.SwarmInspectOptions) (client.SwarmInspectResult, error) {\n\treturn d.apiClient.SwarmInspect(ctx, options)\n}\n\nfunc (d *DryRunClient) SwarmUpdate(ctx context.Context, options client.SwarmUpdateOptions) (client.SwarmUpdateResult, error) {\n\treturn d.apiClient.SwarmUpdate(ctx, options)\n}\n\nfunc (d *DryRunClient) SecretList(ctx context.Context, options client.SecretListOptions) (client.SecretListResult, error) {\n\treturn d.apiClient.SecretList(ctx, options)\n}\n\nfunc (d *DryRunClient) SecretCreate(ctx context.Context, options client.SecretCreateOptions) (client.SecretCreateResult, error) {\n\treturn d.apiClient.SecretCreate(ctx, options)\n}\n\nfunc (d *DryRunClient) SecretRemove(ctx context.Context, id string, options client.SecretRemoveOptions) (client.SecretRemoveResult, error) {\n\treturn d.apiClient.SecretRemove(ctx, id, options)\n}\n\nfunc (d *DryRunClient) SecretInspect(ctx context.Context, name string, options client.SecretInspectOptions) (client.SecretInspectResult, error) {\n\treturn d.apiClient.SecretInspect(ctx, name, options)\n}\n\nfunc (d *DryRunClient) SecretUpdate(ctx context.Context, id string, options client.SecretUpdateOptions) (client.SecretUpdateResult, error) {\n\treturn d.apiClient.SecretUpdate(ctx, id, options)\n}\n\nfunc (d *DryRunClient) Events(ctx context.Context, options client.EventsListOptions) 
client.EventsResult {\n\treturn d.apiClient.Events(ctx, options)\n}\n\nfunc (d *DryRunClient) Info(ctx context.Context, options client.InfoOptions) (client.SystemInfoResult, error) {\n\treturn d.apiClient.Info(ctx, options)\n}\n\nfunc (d *DryRunClient) RegistryLogin(ctx context.Context, options client.RegistryLoginOptions) (client.RegistryLoginResult, error) {\n\treturn d.apiClient.RegistryLogin(ctx, options)\n}\n\nfunc (d *DryRunClient) DiskUsage(ctx context.Context, options client.DiskUsageOptions) (client.DiskUsageResult, error) {\n\treturn d.apiClient.DiskUsage(ctx, options)\n}\n\nfunc (d *DryRunClient) Ping(ctx context.Context, options client.PingOptions) (client.PingResult, error) {\n\treturn d.apiClient.Ping(ctx, options)\n}\n\nfunc (d *DryRunClient) VolumeInspect(ctx context.Context, volumeID string, options client.VolumeInspectOptions) (client.VolumeInspectResult, error) {\n\treturn d.apiClient.VolumeInspect(ctx, volumeID, options)\n}\n\nfunc (d *DryRunClient) VolumeList(ctx context.Context, opts client.VolumeListOptions) (client.VolumeListResult, error) {\n\treturn d.apiClient.VolumeList(ctx, opts)\n}\n\nfunc (d *DryRunClient) VolumePrune(ctx context.Context, options client.VolumePruneOptions) (client.VolumePruneResult, error) {\n\treturn d.apiClient.VolumePrune(ctx, options)\n}\n\nfunc (d *DryRunClient) VolumeUpdate(ctx context.Context, volumeID string, options client.VolumeUpdateOptions) (client.VolumeUpdateResult, error) {\n\treturn d.apiClient.VolumeUpdate(ctx, volumeID, options)\n}\n\nfunc (d *DryRunClient) ClientVersion() string {\n\treturn d.apiClient.ClientVersion()\n}\n\nfunc (d *DryRunClient) DaemonHost() string {\n\treturn d.apiClient.DaemonHost()\n}\n\nfunc (d *DryRunClient) ServerVersion(ctx context.Context, options client.ServerVersionOptions) (client.ServerVersionResult, error) {\n\treturn d.apiClient.ServerVersion(ctx, options)\n}\n\nfunc (d *DryRunClient) DialHijack(ctx context.Context, url, proto string, meta map[string][]string) 
(net.Conn, error) {\n\treturn d.apiClient.DialHijack(ctx, url, proto, meta)\n}\n\nfunc (d *DryRunClient) Dialer() func(context.Context) (net.Conn, error) {\n\treturn d.apiClient.Dialer()\n}\n\nfunc (d *DryRunClient) Close() error {\n\treturn d.apiClient.Close()\n}\n\nfunc (d *DryRunClient) CheckpointCreate(ctx context.Context, container string, options client.CheckpointCreateOptions) (client.CheckpointCreateResult, error) {\n\treturn d.apiClient.CheckpointCreate(ctx, container, options)\n}\n\nfunc (d *DryRunClient) CheckpointRemove(ctx context.Context, container string, options client.CheckpointRemoveOptions) (client.CheckpointRemoveResult, error) {\n\treturn d.apiClient.CheckpointRemove(ctx, container, options)\n}\n\nfunc (d *DryRunClient) CheckpointList(ctx context.Context, container string, options client.CheckpointListOptions) (client.CheckpointListResult, error) {\n\treturn d.apiClient.CheckpointList(ctx, container, options)\n}\n"
  },
  {
    "path": "pkg/e2e/assert.go",
    "content": "/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"encoding/json\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\n// RequireServiceState ensures that the container is in the expected state\n// (running or exited).\nfunc RequireServiceState(t testing.TB, cli *CLI, service string, state string) {\n\tt.Helper()\n\tpsRes := cli.RunDockerComposeCmd(t, \"ps\", \"--all\", \"--format=json\", service)\n\tvar svc map[string]any\n\trequire.NoError(t, json.Unmarshal([]byte(psRes.Stdout()), &svc),\n\t\t\"Invalid `compose ps` JSON: command output: %s\",\n\t\tpsRes.Combined())\n\n\trequire.Equal(t, service, svc[\"Service\"],\n\t\t\"Found ps output for unexpected service\")\n\trequire.Equalf(t,\n\t\tstrings.ToLower(state),\n\t\tstrings.ToLower(svc[\"State\"].(string)),\n\t\t\"Service %q (%s) not in expected state\",\n\t\tservice, svc[\"Name\"],\n\t)\n}\n"
  },
  {
    "path": "pkg/e2e/bridge_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestConvertAndTransformList(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"bridge\"\n\tconst bridgeImageVersion = \"v0.0.3\"\n\ttmpDir := t.TempDir()\n\n\tt.Run(\"kubernetes manifests\", func(t *testing.T) {\n\t\tkubedir := filepath.Join(tmpDir, \"kubernetes\")\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/bridge/compose.yaml\", \"--project-name\", projectName, \"bridge\", \"convert\",\n\t\t\t\"--output\", kubedir, \"--transformation\", fmt.Sprintf(\"docker/compose-bridge-kubernetes:%s\", bridgeImageVersion))\n\t\tassert.NilError(t, res.Error)\n\t\tassert.Equal(t, res.ExitCode, 0)\n\t\tres = c.RunCmd(t, \"diff\", \"-r\", kubedir, \"./fixtures/bridge/expected-kubernetes\")\n\t\tassert.NilError(t, res.Error, res.Combined())\n\t})\n\n\tt.Run(\"helm charts\", func(t *testing.T) {\n\t\thelmDir := filepath.Join(tmpDir, \"helm\")\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/bridge/compose.yaml\", \"--project-name\", projectName, \"bridge\", \"convert\",\n\t\t\t\"--output\", helmDir, \"--transformation\", fmt.Sprintf(\"docker/compose-bridge-helm:%s\", bridgeImageVersion))\n\t\tassert.NilError(t, res.Error)\n\t\tassert.Equal(t, res.ExitCode, 0)\n\t\tres = c.RunCmd(t, 
\"diff\", \"-r\", helmDir, \"./fixtures/bridge/expected-helm\")\n\t\tassert.NilError(t, res.Error, res.Combined())\n\t})\n\n\tt.Run(\"list transformers images\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"bridge\", \"transformations\",\n\t\t\t\"ls\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"docker/compose-bridge-helm\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"docker/compose-bridge-kubernetes\"), res.Combined())\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/build_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n\t\"gotest.tools/v3/poll\"\n)\n\nfunc TestLocalComposeBuild(t *testing.T) {\n\tfor _, env := range []string{\"DOCKER_BUILDKIT=0\", \"DOCKER_BUILDKIT=1\"} {\n\t\tc := NewCLI(t, WithEnv(strings.Split(env, \",\")...))\n\n\t\tt.Run(env+\" build named and unnamed images\", func(t *testing.T) {\n\t\t\t// ensure local test run does not reuse previously build image\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"build-test-nginx\")\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"custom-nginx\")\n\n\t\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"build\")\n\n\t\t\tres.Assert(t, icmd.Expected{Out: \"COPY static /usr/share/nginx/html\"})\n\t\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-nginx\")\n\t\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"custom-nginx\")\n\t\t})\n\n\t\tt.Run(env+\" build with build-arg\", func(t *testing.T) {\n\t\t\t// ensure local test run does not reuse previously build image\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"build-test-nginx\")\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", 
\"custom-nginx\")\n\n\t\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"build\", \"--build-arg\", \"FOO=BAR\")\n\n\t\t\tres := c.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-nginx\")\n\t\t\tres.Assert(t, icmd.Expected{Out: `\"FOO\": \"BAR\"`})\n\t\t})\n\n\t\tt.Run(env+\" build with build-arg set by env\", func(t *testing.T) {\n\t\t\t// ensure local test run does not reuse previously build image\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"build-test-nginx\")\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"custom-nginx\")\n\n\t\t\ticmd.RunCmd(c.NewDockerComposeCmd(t,\n\t\t\t\t\"--project-directory\",\n\t\t\t\t\"fixtures/build-test\",\n\t\t\t\t\"build\",\n\t\t\t\t\"--build-arg\",\n\t\t\t\t\"FOO\"),\n\t\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\t\tcmd.Env = append(cmd.Env, \"FOO=BAR\")\n\t\t\t\t}).Assert(t, icmd.Success)\n\n\t\t\tres := c.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-nginx\")\n\t\t\tres.Assert(t, icmd.Expected{Out: `\"FOO\": \"BAR\"`})\n\t\t})\n\n\t\tt.Run(env+\" build with multiple build-args \", func(t *testing.T) {\n\t\t\t// ensure local test run does not reuse previously build image\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"multi-args-multiargs\")\n\t\t\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/multi-args\", \"build\")\n\n\t\t\ticmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"DOCKER_BUILDKIT=0\")\n\t\t\t})\n\n\t\t\tres := c.RunDockerCmd(t, \"image\", \"inspect\", \"multi-args-multiargs\")\n\t\t\tres.Assert(t, icmd.Expected{Out: `\"RESULT\": \"SUCCESS\"`})\n\t\t})\n\n\t\tt.Run(env+\" build as part of up\", func(t *testing.T) {\n\t\t\t// ensure local test run does not reuse previously build image\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"build-test-nginx\")\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"custom-nginx\")\n\n\t\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", 
\"up\", \"-d\")\n\t\t\tt.Cleanup(func() {\n\t\t\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"down\")\n\t\t\t})\n\n\t\t\tres.Assert(t, icmd.Expected{Out: \"COPY static /usr/share/nginx/html\"})\n\t\t\tres.Assert(t, icmd.Expected{Out: \"COPY static2 /usr/share/nginx/html\"})\n\n\t\t\toutput := HTTPGetWithRetry(t, \"http://localhost:8070\", http.StatusOK, 2*time.Second, 20*time.Second)\n\t\t\tassert.Assert(t, strings.Contains(output, \"Hello from Nginx container\"))\n\n\t\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-nginx\")\n\t\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"custom-nginx\")\n\t\t})\n\n\t\tt.Run(env+\" no rebuild when up again\", func(t *testing.T) {\n\t\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"up\", \"-d\")\n\n\t\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"COPY static\"))\n\t\t})\n\n\t\tt.Run(env+\" rebuild when up --build\", func(t *testing.T) {\n\t\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"up\", \"-d\", \"--build\")\n\n\t\t\tres.Assert(t, icmd.Expected{Out: \"COPY static /usr/share/nginx/html\"})\n\t\t\tres.Assert(t, icmd.Expected{Out: \"COPY static2 /usr/share/nginx/html\"})\n\t\t})\n\n\t\tt.Run(env+\" build --push ignored for unnamed images\", func(t *testing.T) {\n\t\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"build\", \"--push\", \"nginx\")\n\t\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"failed to push\"), res.Stdout())\n\t\t})\n\n\t\tt.Run(env+\" build --quiet\", func(t *testing.T) {\n\t\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"build\", \"--quiet\")\n\t\t\tres.Assert(t, icmd.Expected{Out: \"\"})\n\t\t})\n\n\t\tt.Run(env+\" cleanup build project\", func(t *testing.T) {\n\t\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test\", \"down\")\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", 
\"-f\", \"build-test-nginx\")\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"custom-nginx\")\n\t\t})\n\t}\n}\n\nfunc TestBuildSSH(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.Skip(\"Running on Windows. Skipping...\")\n\t}\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"build failed with ssh default value\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test\", \"build\", \"--ssh\", \"\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"invalid empty ssh agent socket: make sure SSH_AUTH_SOCK is set\",\n\t\t})\n\t})\n\n\tt.Run(\"build succeed with ssh from Compose file\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"build-test-ssh\")\n\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/ssh\", \"build\")\n\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-ssh\")\n\t})\n\n\tt.Run(\"build succeed with ssh from CLI\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"build-test-ssh\")\n\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/ssh/compose-without-ssh.yaml\", \"--project-directory\",\n\t\t\t\"fixtures/build-test/ssh\", \"build\", \"--no-cache\", \"--ssh\", \"fake-ssh=./fixtures/build-test/ssh/fake_rsa\")\n\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-ssh\")\n\t})\n\n\t/*\n\t\tFIXME disabled waiting for https://github.com/moby/buildkit/issues/5558\n\t\tt.Run(\"build failed with wrong ssh key id from CLI\", func(t *testing.T) {\n\t\t\tc.RunDockerOrExitError(t, \"rmi\", \"build-test-ssh\")\n\n\t\t\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/build-test/ssh/compose-without-ssh.yaml\",\n\t\t\t\t\"--project-directory\", \"fixtures/build-test/ssh\", \"build\", \"--no-cache\", \"--ssh\",\n\t\t\t\t\"wrong-ssh=./fixtures/build-test/ssh/fake_rsa\")\n\t\t\tres.Assert(t, icmd.Expected{\n\t\t\t\tExitCode: 1,\n\t\t\t\tErr:      \"unset ssh forward key 
fake-ssh\",\n\t\t\t})\n\t\t})\n\t*/\n\n\tt.Run(\"build succeed as part of up with ssh from Compose file\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"build-test-ssh\")\n\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/ssh\", \"up\", \"-d\", \"--build\")\n\t\tt.Cleanup(func() {\n\t\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/ssh\", \"down\")\n\t\t})\n\t\tc.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-ssh\")\n\t})\n}\n\nfunc TestBuildSecrets(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.Skip(\"skipping test on windows\")\n\t}\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"build with secrets\", func(t *testing.T) {\n\t\t// ensure local test run does not reuse previously build image\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"build-test-secret\")\n\n\t\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/secrets\", \"build\")\n\n\t\tres := icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\tcmd.Env = append(cmd.Env, \"SOME_SECRET=bar\")\n\t\t})\n\n\t\tres.Assert(t, icmd.Success)\n\t})\n}\n\nfunc TestBuildTags(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"build with tags\", func(t *testing.T) {\n\t\t// ensure local test run does not reuse previously build image\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"build-test-tags\")\n\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"./fixtures/build-test/tags\", \"build\", \"--no-cache\")\n\n\t\tres := c.RunDockerCmd(t, \"image\", \"inspect\", \"build-test-tags\")\n\t\texpectedOutput := `\"RepoTags\": [\n            \"docker/build-test-tags:1.0.0\",\n            \"build-test-tags:latest\",\n            \"other-image-name:v1.0.0\"\n        ],\n`\n\t\tres.Assert(t, icmd.Expected{Out: expectedOutput})\n\t})\n}\n\nfunc TestBuildImageDependencies(t *testing.T) {\n\tdoTest := func(t *testing.T, cli *CLI, args ...string) {\n\t\tresetState := func() {\n\t\t\tcli.RunDockerComposeCmd(t, \"down\", \"--rmi=all\", 
\"-t=0\")\n\t\t\tres := cli.RunDockerOrExitError(t, \"image\", \"rm\", \"build-dependencies-service\")\n\t\t\tif res.Error != nil {\n\t\t\t\trequire.Contains(t, res.Stderr(), `No such image: build-dependencies-service`)\n\t\t\t}\n\t\t}\n\t\tresetState()\n\t\tt.Cleanup(resetState)\n\n\t\t// the image should NOT exist now\n\t\tres := cli.RunDockerOrExitError(t, \"image\", \"inspect\", \"build-dependencies-service\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"No such image: build-dependencies-service\",\n\t\t})\n\n\t\tres = cli.RunDockerComposeCmd(t, args...)\n\t\tt.Log(res.Combined())\n\n\t\tres = cli.RunDockerCmd(t,\n\t\t\t\"image\", \"inspect\", \"--format={{ index .RepoTags 0 }}\",\n\t\t\t\"build-dependencies-service\")\n\t\tres.Assert(t, icmd.Expected{Out: \"build-dependencies-service:latest\"})\n\n\t\tres = cli.RunDockerComposeCmd(t, \"down\", \"-t0\", \"--rmi=all\", \"--remove-orphans\")\n\t\tt.Log(res.Combined())\n\n\t\tres = cli.RunDockerOrExitError(t, \"image\", \"inspect\", \"build-dependencies-service\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"No such image: build-dependencies-service\",\n\t\t})\n\t}\n\n\tt.Run(\"ClassicBuilder\", func(t *testing.T) {\n\t\tcli := NewCLI(t, WithEnv(\n\t\t\t\"DOCKER_BUILDKIT=0\",\n\t\t\t\"COMPOSE_FILE=./fixtures/build-dependencies/classic.yaml\",\n\t\t))\n\t\tdoTest(t, cli, \"build\")\n\t\tdoTest(t, cli, \"build\", \"--with-dependencies\", \"service\")\n\t})\n\n\tt.Run(\"Bake by additional contexts\", func(t *testing.T) {\n\t\tcli := NewCLI(t, WithEnv(\n\t\t\t\"DOCKER_BUILDKIT=1\", \"COMPOSE_BAKE=1\",\n\t\t\t\"COMPOSE_FILE=./fixtures/build-dependencies/compose.yaml\",\n\t\t))\n\t\tdoTest(t, cli, \"--verbose\", \"build\")\n\t\tdoTest(t, cli, \"--verbose\", \"build\", \"service\")\n\t\tdoTest(t, cli, \"--verbose\", \"up\", \"--build\", \"service\")\n\t})\n}\n\nfunc TestBuildPlatformsWithCorrectBuildxConfig(t *testing.T) {\n\tif runtime.GOOS == \"windows\" 
{\n\t\tt.Skip(\"Running on Windows. Skipping...\")\n\t}\n\tc := NewParallelCLI(t)\n\n\t// declare builder\n\tresult := c.RunDockerCmd(t, \"buildx\", \"create\", \"--name\", \"build-platform\", \"--use\", \"--bootstrap\")\n\tassert.NilError(t, result.Error)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"down\")\n\t\t_ = c.RunDockerCmd(t, \"buildx\", \"rm\", \"-f\", \"build-platform\")\n\t})\n\n\tt.Run(\"platform not supported by builder\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test/platforms\",\n\t\t\t\"-f\", \"fixtures/build-test/platforms/compose-unsupported-platform.yml\", \"build\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"no match for platform\",\n\t\t})\n\t})\n\n\tt.Run(\"multi-arch build ok\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"build\")\n\t\tassert.NilError(t, res.Error, res.Stderr())\n\t\tres.Assert(t, icmd.Expected{Out: \"I am building for linux/arm64\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"I am building for linux/amd64\"})\n\t})\n\n\tt.Run(\"multi-arch multi service builds ok\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test/platforms\",\n\t\t\t\"-f\", \"fixtures/build-test/platforms/compose-multiple-platform-builds.yaml\", \"build\")\n\t\tassert.NilError(t, res.Error, res.Stderr())\n\t\tres.Assert(t, icmd.Expected{Out: \"I'm Service A and I am building for linux/arm64\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"I'm Service A and I am building for linux/amd64\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"I'm Service B and I am building for linux/arm64\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"I'm Service B and I am building for linux/amd64\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"I'm Service C and I am building for 
linux/arm64\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"I'm Service C and I am building for linux/amd64\"})\n\t})\n\n\tt.Run(\"multi-arch up --build\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"up\", \"--build\", \"--menu=false\")\n\t\tassert.NilError(t, res.Error, res.Stderr())\n\t\tres.Assert(t, icmd.Expected{Out: \"platforms-1 exited with code 0\"})\n\t})\n\n\tt.Run(\"use DOCKER_DEFAULT_PLATFORM value when up --build\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"up\", \"--build\", \"--menu=false\")\n\t\tres := icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\tcmd.Env = append(cmd.Env, \"DOCKER_DEFAULT_PLATFORM=linux/amd64\")\n\t\t})\n\t\tassert.NilError(t, res.Error, res.Stderr())\n\t\tres.Assert(t, icmd.Expected{Out: \"I am building for linux/amd64\"})\n\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"I am building for linux/arm64\"))\n\t})\n\n\tt.Run(\"use service platform value when no build platforms defined \", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test/platforms\",\n\t\t\t\"-f\", \"fixtures/build-test/platforms/compose-service-platform-and-no-build-platforms.yaml\", \"build\")\n\t\tassert.NilError(t, res.Error, res.Stderr())\n\t\tres.Assert(t, icmd.Expected{Out: \"I am building for linux/386\"})\n\t})\n}\n\nfunc TestBuildPrivileged(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\t// declare builder\n\tresult := c.RunDockerCmd(t, \"buildx\", \"create\", \"--name\", \"build-privileged\", \"--use\", \"--bootstrap\", \"--buildkitd-flags\",\n\t\t`'--allow-insecure-entitlement=security.insecure'`)\n\tassert.NilError(t, result.Error)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/privileged\", \"down\")\n\t\t_ = c.RunDockerCmd(t, \"buildx\", \"rm\", \"-f\", 
\"build-privileged\")\n\t})\n\n\tt.Run(\"use build privileged mode to run insecure build command\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/privileged\", \"build\")\n\t\tcapEffRe := regexp.MustCompile(\"CapEff:\\t([0-9a-f]+)\")\n\t\tmatches := capEffRe.FindStringSubmatch(res.Stdout())\n\t\tassert.Equal(t, 2, len(matches), \"Did not match CapEff in output, matches: %v\", matches)\n\n\t\tcapEff, err := strconv.ParseUint(matches[1], 16, 64)\n\t\tassert.NilError(t, err, \"Parsing CapEff: %s\", matches[1])\n\n\t\t// NOTE: can't use constant from x/sys/unix or tests won't compile on macOS/Windows\n\t\t// #define CAP_SYS_ADMIN        21\n\t\t// https://github.com/torvalds/linux/blob/v6.1/include/uapi/linux/capability.h#L278\n\t\tconst capSysAdmin = 0x15\n\t\tif capEff&capSysAdmin != capSysAdmin {\n\t\t\tt.Fatalf(\"CapEff %s is missing CAP_SYS_ADMIN\", matches[1])\n\t\t}\n\t})\n}\n\nfunc TestBuildPlatformsStandardErrors(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"no platform support with Classic Builder\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"build\")\n\n\t\tres := icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\tcmd.Env = append(cmd.Env, \"DOCKER_BUILDKIT=0\")\n\t\t})\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"the classic builder doesn't support multi-arch build, set DOCKER_BUILDKIT=1 to use BuildKit\",\n\t\t})\n\t})\n\n\tt.Run(\"builder does not support multi-arch\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"build\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"Multi-platform build is not supported for the docker driver.\",\n\t\t})\n\t})\n\n\tt.Run(\"service platform not defined in platforms build section\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, 
\"--project-directory\", \"fixtures/build-test/platforms\",\n\t\t\t\"-f\", \"fixtures/build-test/platforms/compose-service-platform-not-in-build-platforms.yaml\", \"build\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      `service.build.platforms MUST include service.platform \"linux/riscv64\"`,\n\t\t})\n\t})\n\n\tt.Run(\"DOCKER_DEFAULT_PLATFORM value not defined in platforms build section\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/platforms\", \"build\")\n\t\tres := icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\tcmd.Env = append(cmd.Env, \"DOCKER_DEFAULT_PLATFORM=windows/amd64\")\n\t\t})\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      `service \"platforms\" build.platforms does not support value set by DOCKER_DEFAULT_PLATFORM: windows/amd64`,\n\t\t})\n\t})\n\n\tt.Run(\"no privileged support with Classic Builder\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/privileged\", \"build\")\n\n\t\tres := icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\tcmd.Env = append(cmd.Env, \"DOCKER_BUILDKIT=0\")\n\t\t})\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"the classic builder doesn't support privileged mode, set DOCKER_BUILDKIT=1 to use BuildKit\",\n\t\t})\n\t})\n}\n\nfunc TestBuildBuilder(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tbuilderName := \"build-with-builder\"\n\t// declare builder\n\tresult := c.RunDockerCmd(t, \"buildx\", \"create\", \"--name\", builderName, \"--use\", \"--bootstrap\")\n\tassert.NilError(t, result.Error)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/\", \"down\")\n\t\t_ = c.RunDockerCmd(t, \"buildx\", \"rm\", \"-f\", builderName)\n\t})\n\n\tt.Run(\"use specific builder to run build command\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", 
\"fixtures/build-test\", \"build\", \"--builder\", builderName)\n\t\tassert.NilError(t, res.Error, res.Stderr())\n\t})\n\n\tt.Run(\"error when using specific builder to run build command\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/build-test\", \"build\", \"--builder\", \"unknown-builder\")\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      fmt.Sprintf(`no builder %q found`, \"unknown-builder\"),\n\t\t})\n\t})\n}\n\nfunc TestBuildEntitlements(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\t// declare builder\n\tresult := c.RunDockerCmd(t, \"buildx\", \"create\", \"--name\", \"build-insecure\", \"--use\", \"--bootstrap\", \"--buildkitd-flags\",\n\t\t`'--allow-insecure-entitlement=security.insecure'`)\n\tassert.NilError(t, result.Error)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/entitlements\", \"down\")\n\t\t_ = c.RunDockerCmd(t, \"buildx\", \"rm\", \"-f\", \"build-insecure\")\n\t})\n\n\tt.Run(\"use build privileged mode to run insecure build command\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/build-test/entitlements\", \"build\")\n\t\tcapEffRe := regexp.MustCompile(\"CapEff:\\t([0-9a-f]+)\")\n\t\tmatches := capEffRe.FindStringSubmatch(res.Stdout())\n\t\tassert.Equal(t, 2, len(matches), \"Did not match CapEff in output, matches: %v\", matches)\n\n\t\tcapEff, err := strconv.ParseUint(matches[1], 16, 64)\n\t\tassert.NilError(t, err, \"Parsing CapEff: %s\", matches[1])\n\n\t\t// NOTE: can't use constant from x/sys/unix or tests won't compile on macOS/Windows\n\t\t// #define CAP_SYS_ADMIN        21\n\t\t// https://github.com/torvalds/linux/blob/v6.1/include/uapi/linux/capability.h#L278\n\t\tconst capSysAdmin = 1 << 21 // bitmask for capability number 21\n\t\tif capEff&capSysAdmin != capSysAdmin {\n\t\t\tt.Fatalf(\"CapEff %s is missing CAP_SYS_ADMIN\", matches[1])\n\t\t}\n\t})\n}\n\nfunc TestBuildDependsOn(t *testing.T) 
{\n\tc := NewParallelCLI(t)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-dependencies/compose-depends_on.yaml\", \"down\", \"--rmi=local\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-dependencies/compose-depends_on.yaml\", \"--progress=plain\", \"up\", \"test2\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"test1 Built\"))\n}\n\nfunc TestBuildSubset(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/subset/compose.yaml\", \"down\", \"--rmi=local\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/subset/compose.yaml\", \"build\", \"main\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"main Built\"))\n}\n\nfunc TestBuildDependentImage(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/dependencies/compose.yaml\", \"down\", \"--rmi=local\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/dependencies/compose.yaml\", \"build\", \"firstbuild\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"firstbuild Built\"))\n\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/dependencies/compose.yaml\", \"build\", \"secondbuild\")\n\tout = res.Combined()\n\tassert.Check(t, strings.Contains(out, \"secondbuild Built\"))\n}\n\nfunc TestBuildSubDependencies(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/sub-dependencies/compose.yaml\", \"down\", \"--rmi=local\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/sub-dependencies/compose.yaml\", \"build\", \"main\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"main Built\"))\n\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/sub-dependencies/compose.yaml\", \"up\", 
\"--build\", \"main\")\n\tout = res.Combined()\n\tassert.Check(t, strings.Contains(out, \"main Built\"))\n}\n\nfunc TestBuildLongOutputLine(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/long-output-line/compose.yaml\", \"down\", \"--rmi=local\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/long-output-line/compose.yaml\", \"build\", \"long-line\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"long-line Built\"))\n\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/long-output-line/compose.yaml\", \"up\", \"--build\", \"long-line\")\n\tout = res.Combined()\n\tassert.Check(t, strings.Contains(out, \"long-line Built\"))\n}\n\nfunc TestBuildDependentImageWithProfile(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/profiles/compose.yaml\", \"down\", \"--rmi=local\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/build-test/profiles/compose.yaml\", \"build\", \"secret-build-test\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"secret-build-test Built\"))\n}\n\nfunc TestBuildTLS(t *testing.T) {\n\tt.Helper()\n\n\tc := NewParallelCLI(t)\n\tconst dindBuilder = \"e2e-dind-builder\"\n\ttmp := t.TempDir()\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerCmd(t, \"rm\", \"-f\", dindBuilder)\n\t\tc.RunDockerCmd(t, \"context\", \"rm\", dindBuilder)\n\t})\n\n\tc.RunDockerCmd(t, \"run\", \"--name\", dindBuilder, \"--privileged\", \"-p\", \"2376:2376\", \"-d\", \"docker:dind\")\n\n\tpoll.WaitOn(t, func(_ poll.LogT) poll.Result {\n\t\tres := c.RunDockerCmd(t, \"logs\", dindBuilder)\n\t\tif strings.Contains(res.Combined(), \"API listen on [::]:2376\") {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"waiting for Docker daemon to be running\")\n\t}, poll.WithTimeout(10*time.Second))\n\n\ttime.Sleep(1 * time.Second) // wait for 
dind setup\n\tc.RunDockerCmd(t, \"cp\", dindBuilder+\":/certs/client\", tmp)\n\n\tc.RunDockerCmd(t, \"context\", \"create\", dindBuilder, \"--docker\",\n\t\tfmt.Sprintf(\"host=tcp://localhost:2376,ca=%s/client/ca.pem,cert=%s/client/cert.pem,key=%s/client/key.pem,skip-tls-verify=1\", tmp, tmp, tmp))\n\n\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"fixtures/build-test/minimal/compose.yaml\", \"build\")\n\tcmd.Env = append(cmd.Env, \"DOCKER_CONTEXT=\"+dindBuilder)\n\tcmd.Stdout = os.Stdout\n\tres := icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{Err: \"Built\"})\n}\n\nfunc TestBuildEscaped(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"./fixtures/build-test/escaped\", \"build\", \"--no-cache\", \"foo\")\n\tres.Assert(t, icmd.Expected{Out: \"foo is ${bar}\"})\n\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"./fixtures/build-test/escaped\", \"build\", \"--no-cache\", \"echo\")\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"./fixtures/build-test/escaped\", \"build\", \"--no-cache\", \"arg\")\n\tres.Assert(t, icmd.Success)\n}\n"
  },
  {
    "path": "pkg/e2e/cancel_test.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strings\"\n\t\"syscall\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc TestComposeCancel(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"metrics on cancel Compose build\", func(t *testing.T) {\n\t\tconst buildProjectPath = \"fixtures/build-infinite/compose.yaml\"\n\n\t\tctx, cancel := context.WithCancel(t.Context())\n\t\tdefer cancel()\n\n\t\t// require a separate groupID from the process running tests, in order to simulate ctrl+C from a terminal.\n\t\t// sending kill signal\n\t\tvar stdout, stderr utils.SafeBuffer\n\t\tcmd, err := StartWithNewGroupID(\n\t\t\tctx,\n\t\t\tc.NewDockerComposeCmd(t, \"-f\", buildProjectPath, \"build\", \"--progress\", \"plain\"),\n\t\t\t&stdout,\n\t\t\t&stderr,\n\t\t)\n\t\tassert.NilError(t, err)\n\t\tprocessDone := make(chan error, 1)\n\t\tgo func() {\n\t\t\tdefer close(processDone)\n\t\t\tprocessDone <- cmd.Wait()\n\t\t}()\n\n\t\tc.WaitForCondition(t, func() (bool, string) {\n\t\t\tout := stdout.String()\n\t\t\terrors := stderr.String()\n\t\t\treturn strings.Contains(out,\n\t\t\t\t\t\"RUN sleep infinity\"), fmt.Sprintf(\"'RUN sleep infinity' not found in : \\n%s\\nStderr: \\n%s\\n\", 
out,\n\t\t\t\t\terrors)\n\t\t}, 30*time.Second, 1*time.Second)\n\n\t\t// simulate Ctrl-C : send signal to processGroup, children will have same groupId by default\n\t\terr = syscall.Kill(-cmd.Process.Pid, syscall.SIGINT)\n\t\tassert.NilError(t, err)\n\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\tt.Fatal(\"test context canceled\")\n\t\tcase err := <-processDone:\n\t\t\t// TODO(milas): Compose should really not return exit code 130 here,\n\t\t\t// \tthis is an old hack for the compose-cli wrapper\n\t\t\tassert.Error(t, err, \"exit status 130\",\n\t\t\t\t\"STDOUT:\\n%s\\nSTDERR:\\n%s\\n\", stdout.String(), stderr.String())\n\t\tcase <-time.After(10 * time.Second):\n\t\t\tt.Fatal(\"timeout waiting for Compose exit\")\n\t\t}\n\t})\n}\n\nfunc StartWithNewGroupID(ctx context.Context, command icmd.Cmd, stdout *utils.SafeBuffer, stderr *utils.SafeBuffer) (*exec.Cmd, error) {\n\tcmd := exec.CommandContext(ctx, command.Command[0], command.Command[1:]...)\n\tcmd.Env = command.Env\n\tcmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}\n\tif stdout != nil {\n\t\tcmd.Stdout = stdout\n\t}\n\tif stderr != nil {\n\t\tcmd.Stderr = stderr\n\t}\n\terr := cmd.Start()\n\treturn cmd, err\n}\n"
  },
  {
    "path": "pkg/e2e/cascade_test.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestCascadeStop(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-cascade-stop\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cascade/compose.yaml\", \"--project-name\", projectName,\n\t\t\"up\", \"--abort-on-container-exit\")\n\tassert.Assert(t, strings.Contains(res.Combined(), \"exit-1 exited with code 0\"), res.Combined())\n\t// no --exit-code-from, so this is not an error\n\tassert.Equal(t, res.ExitCode, 0)\n}\n\nfunc TestCascadeFail(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-cascade-fail\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/cascade/compose.yaml\", \"--project-name\", projectName,\n\t\t\"up\", \"--abort-on-container-failure\")\n\tassert.Assert(t, strings.Contains(res.Combined(), \"exit-1 exited with code 0\"), res.Combined())\n\tassert.Assert(t, strings.Contains(res.Combined(), \"fail-1 exited with code 111\"), res.Combined())\n\t// failing exit code should be propagated\n\tassert.Equal(t, res.ExitCode, 111)\n}\n"
  },
  {
    "path": "pkg/e2e/commit_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n)\n\nfunc TestCommit(t *testing.T) {\n\tconst projectName = \"e2e-commit-service\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/commit/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"service\")\n\n\tc.RunDockerComposeCmd(\n\t\tt,\n\t\t\"--project-name\",\n\t\tprojectName,\n\t\t\"commit\",\n\t\t\"-a\",\n\t\t\"John Hannibal Smith <hannibal@a-team.com>\",\n\t\t\"-c\",\n\t\t\"ENV DEBUG=true\",\n\t\t\"-m\",\n\t\t\"sample commit\",\n\t\t\"service\",\n\t\t\"service:latest\",\n\t)\n}\n\nfunc TestCommitWithReplicas(t *testing.T) {\n\tconst projectName = \"e2e-commit-service-with-replicas\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/commit/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"service-with-replicas\")\n\n\tc.RunDockerComposeCmd(\n\t\tt,\n\t\t\"--project-name\",\n\t\tprojectName,\n\t\t\"commit\",\n\t\t\"-a\",\n\t\t\"John Hannibal Smith 
<hannibal@a-team.com>\",\n\t\t\"-c\",\n\t\t\"ENV DEBUG=true\",\n\t\t\"-m\",\n\t\t\"sample commit\",\n\t\t\"--index=1\",\n\t\t\"service-with-replicas\",\n\t\t\"service-with-replicas:1\",\n\t)\n\tc.RunDockerComposeCmd(\n\t\tt,\n\t\t\"--project-name\",\n\t\tprojectName,\n\t\t\"commit\",\n\t\t\"-a\",\n\t\t\"John Hannibal Smith <hannibal@a-team.com>\",\n\t\t\"-c\",\n\t\t\"ENV DEBUG=true\",\n\t\t\"-m\",\n\t\t\"sample commit\",\n\t\t\"--index=2\",\n\t\t\"service-with-replicas\",\n\t\t\"service-with-replicas:2\",\n\t)\n}\n"
  },
  {
    "path": "pkg/e2e/compose_environment_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestEnvPriority(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"env-compose-priority\")\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose-with-env.yaml\",\n\t\t\t\"up\", \"-d\", \"--build\")\n\t})\n\n\t// Full options activated\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From OS Environment)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"compose file priority\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose-with-env.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tcmd.Env = append(cmd.Env, \"WHEREAMI=shell\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"shell\")\n\t})\n\n\t// Full options activated\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected\n\t// 2. 
Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"compose file priority\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose-with-env.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI=shell\", \"env-compose-priority\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"shell\")\n\t})\n\n\t// No Compose file, all other options\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From OS Environment)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"shell priority\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tcmd.Env = append(cmd.Env, \"WHEREAMI=shell\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"shell\")\n\t})\n\n\t// No Compose file, all other options with env variable from OS environment\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From environment)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. 
Variable is not defined\n\tt.Run(\"shell priority file with default value\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override.with.default\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tcmd.Env = append(cmd.Env, \"WHEREAMI=shell\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"shell\")\n\t})\n\n\t// No Compose file, all other options with env variable from OS environment\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From environment default value from file in --env-file)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"shell priority implicitly set\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override.with.default\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"EnvFileDefaultValue\")\n\t})\n\n\t// No Compose file, all other options with env variable from OS environment\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From environment default value from file in COMPOSE_ENV_FILES)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. 
Variable is not defined\n\tt.Run(\"shell priority from COMPOSE_ENV_FILES variable\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tcmd.Env = append(cmd.Env, \"COMPOSE_ENV_FILES=./fixtures/environment/env-priority/.env.override.with.default\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tstdout := res.Stdout()\n\t\tassert.Equal(t, strings.TrimSpace(stdout), \"EnvFileDefaultValue\")\n\t})\n\n\t// No Compose file and env variable pass to the run command\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"shell priority from run command\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI=shell-run\", \"env-compose-priority\")\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"shell-run\")\n\t})\n\n\t// No Compose file & no env variable but override env file\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From environment patched by .env as a default --env-file value)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. 
Variable is not defined\n\tt.Run(\"override env file from compose\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose-with-env-file.yaml\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"Env File\")\n\t})\n\n\t// No Compose file & no env variable but override by default env file\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From environment patched by --env-file value)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"override env file\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.override\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"override\")\n\t})\n\n\t// No Compose file & no env variable but override env file\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)  <-- Result expected (From environment patched by --env-file value)\n\t// 2. Compose File (service::environment section)\n\t// 3. Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive\n\t// 5. Variable is not defined\n\tt.Run(\"env file\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"Env File\")\n\t})\n\n\t// No Compose file & no env variable, using an empty override env file\n\t// 1. Command Line (docker compose run --env <KEY[=VAL]>)\n\t// 2. Compose File (service::environment section)\n\t// 3. 
Compose File (service::env_file section file)\n\t// 4. Container Image ENV directive <-- Result expected\n\t// 5. Variable is not defined\n\tt.Run(\"use Dockerfile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-priority/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/environment/env-priority/.env.empty\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"WHEREAMI\", \"env-compose-priority\")\n\t\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"Dockerfile\")\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", \"env-priority\", \"down\")\n\t})\n}\n\nfunc TestEnvInterpolation(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"shell priority from run command\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-interpolation/compose.yaml\", \"config\")\n\t\tcmd.Env = append(cmd.Env, \"WHEREAMI=shell\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tres.Assert(t, icmd.Expected{Out: `IMAGE: default_env:shell`})\n\t})\n\n\tt.Run(\"shell priority from run command using default value fallback\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-interpolation-default-value/compose.yaml\", \"config\").\n\t\t\tAssert(t, icmd.Expected{Out: `IMAGE: default_env:EnvFileDefaultValue`})\n\t})\n}\n\nfunc TestCommentsInEnvFile(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"comments in env files\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"env-file-comments\")\n\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-file-comments/compose.yaml\", \"up\", \"-d\", \"--build\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/env-file-comments/compose.yaml\",\n\t\t\t\"run\", \"--rm\", \"-e\", \"COMMENT\", \"-e\", \"NO_COMMENT\", \"env-file-comments\")\n\n\t\tres.Assert(t, icmd.Expected{Out: `COMMENT=1234`})\n\t\tres.Assert(t, icmd.Expected{Out: 
`NO_COMMENT=1234#5`})\n\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", \"env-file-comments\", \"down\", \"--rmi\", \"all\")\n\t})\n}\n\nfunc TestUnsetEnv(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", \"empty-variable\", \"down\", \"--rmi\", \"all\")\n\t})\n\n\tt.Run(\"override env variable\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/empty-variable/compose.yaml\", \"build\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/empty-variable/compose.yaml\",\n\t\t\t\"run\", \"-e\", \"EMPTY=hello\", \"--rm\", \"empty-variable\")\n\t\tres.Assert(t, icmd.Expected{Out: `=hello=`})\n\t})\n\n\tt.Run(\"unset env variable\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/environment/empty-variable/compose.yaml\",\n\t\t\t\"run\", \"--rm\", \"empty-variable\")\n\t\tres.Assert(t, icmd.Expected{Out: `==`})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/compose_exec_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestLocalComposeExec(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"compose-e2e-exec\"\n\n\tcmdArgs := func(cmd string, args ...string) []string {\n\t\tret := []string{\"--project-directory\", \"fixtures/simple-composefile\", \"--project-name\", projectName, cmd}\n\t\tret = append(ret, args...)\n\t\treturn ret\n\t}\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, cmdArgs(\"down\", \"--timeout=0\")...)\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\tc.RunDockerComposeCmd(t, cmdArgs(\"up\", \"-d\")...)\n\n\tt.Run(\"exec true\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, cmdArgs(\"exec\", \"simple\", \"/bin/true\")...)\n\t})\n\n\tt.Run(\"exec false\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, cmdArgs(\"exec\", \"simple\", \"/bin/false\")...)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1})\n\t})\n\n\tt.Run(\"exec with env set\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, cmdArgs(\"exec\", \"-e\", \"FOO\", \"simple\", \"/usr/bin/env\")...),\n\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"FOO=BAR\")\n\t\t\t})\n\t\tres.Assert(t, icmd.Expected{Out: \"FOO=BAR\"})\n\t})\n\n\tt.Run(\"exec without env set\", func(t *testing.T) 
{\n\t\tres := c.RunDockerComposeCmd(t, cmdArgs(\"exec\", \"-e\", \"FOO\", \"simple\", \"/usr/bin/env\")...)\n\t\tassert.Check(t, !strings.Contains(res.Stdout(), \"FOO=\"), res.Combined())\n\t})\n}\n\nfunc TestLocalComposeExecOneOff(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"compose-e2e-exec-one-off\"\n\tdefer c.cleanupWithDown(t, projectName)\n\tcmdArgs := func(cmd string, args ...string) []string {\n\t\tret := []string{\"--project-directory\", \"fixtures/simple-composefile\", \"--project-name\", projectName, cmd}\n\t\tret = append(ret, args...)\n\t\treturn ret\n\t}\n\n\tc.RunDockerComposeCmd(t, cmdArgs(\"run\", \"-d\", \"simple\")...)\n\n\tt.Run(\"exec in one-off container\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, cmdArgs(\"exec\", \"-e\", \"FOO\", \"simple\", \"/usr/bin/env\")...)\n\t\tassert.Check(t, !strings.Contains(res.Stdout(), \"FOO=\"), res.Combined())\n\t})\n\n\tt.Run(\"exec with index\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, cmdArgs(\"exec\", \"--index\", \"1\", \"-e\", \"FOO\", \"simple\", \"/usr/bin/env\")...)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"service \\\"simple\\\" is not running container #1\"})\n\t})\n\tcmdResult := c.RunDockerCmd(t, \"ps\", \"-q\", \"--filter\", \"label=com.docker.compose.project=compose-e2e-exec-one-off\").Stdout()\n\t// trim the trailing newline so we don't pass an empty container ID to `docker stop`\n\tcontainerIDs := strings.Split(strings.TrimSpace(cmdResult), \"\\n\")\n\t_ = c.RunDockerOrExitError(t, append([]string{\"stop\"}, containerIDs...)...)\n}\n"
  },
  {
    "path": "pkg/e2e/compose_run_build_once_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"crypto/rand\"\n\t\"encoding/hex\"\n\t\"fmt\"\n\t\"regexp\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\n// TestRunBuildOnce tests that services with pull_policy: build are only built once\n// when using 'docker compose run', even when they are dependencies.\n// This addresses a bug where dependencies were built twice: once in startDependencies\n// and once in ensureImagesExists.\nfunc TestRunBuildOnce(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"dependency with pull_policy build is built only once\", func(t *testing.T) {\n\t\tprojectName := randomProjectName(\"build-once\")\n\t\t_ = c.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once.yaml\", \"down\", \"--rmi\", \"local\", \"--remove-orphans\", \"-v\")\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once.yaml\", \"--verbose\", \"run\", \"--build\", \"--rm\", \"curl\")\n\n\t\toutput := res.Stdout()\n\n\t\tnginxBuilds := countServiceBuilds(output, projectName, \"nginx\")\n\n\t\tassert.Equal(t, nginxBuilds, 1, \"nginx should build once, built %d times\\nOutput:\\n%s\", nginxBuilds, output)\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"curl service\"))\n\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", 
\"./fixtures/run-test/build-once.yaml\", \"down\", \"--remove-orphans\")\n\t})\n\n\tt.Run(\"nested dependencies build only once each\", func(t *testing.T) {\n\t\tprojectName := randomProjectName(\"build-nested\")\n\t\t_ = c.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once-nested.yaml\", \"down\", \"--rmi\", \"local\", \"--remove-orphans\", \"-v\")\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once-nested.yaml\", \"--verbose\", \"run\", \"--build\", \"--rm\", \"app\")\n\n\t\toutput := res.Stdout()\n\n\t\tdbBuilds := countServiceBuilds(output, projectName, \"db\")\n\t\tapiBuilds := countServiceBuilds(output, projectName, \"api\")\n\t\tappBuilds := countServiceBuilds(output, projectName, \"app\")\n\n\t\tassert.Equal(t, dbBuilds, 1, \"db should build once, built %d times\\nOutput:\\n%s\", dbBuilds, output)\n\t\tassert.Equal(t, apiBuilds, 1, \"api should build once, built %d times\\nOutput:\\n%s\", apiBuilds, output)\n\t\tassert.Equal(t, appBuilds, 1, \"app should build once, built %d times\\nOutput:\\n%s\", appBuilds, output)\n\t\tassert.Assert(t, strings.Contains(output, \"App running\"))\n\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once-nested.yaml\", \"down\", \"--rmi\", \"local\", \"--remove-orphans\", \"-v\")\n\t})\n\n\tt.Run(\"service with no dependencies builds once\", func(t *testing.T) {\n\t\tprojectName := randomProjectName(\"build-simple\")\n\t\t_ = c.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once-no-deps.yaml\", \"down\", \"--rmi\", \"local\", \"--remove-orphans\")\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once-no-deps.yaml\", \"run\", \"--build\", \"--rm\", \"simple\")\n\n\t\toutput := res.Stdout()\n\n\t\tsimpleBuilds := countServiceBuilds(output, projectName, \"simple\")\n\n\t\tassert.Equal(t, simpleBuilds, 1, \"simple should build once, 
built %d times\\nOutput:\\n%s\", simpleBuilds, output)\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"Simple service\"))\n\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, \"-f\", \"./fixtures/run-test/build-once-no-deps.yaml\", \"down\", \"--remove-orphans\")\n\t})\n}\n\n// countServiceBuilds counts how many times a service was built by matching\n// the \"naming to *{projectName}-{serviceName}* done\" pattern in the output\nfunc countServiceBuilds(output, projectName, serviceName string) int {\n\tpattern := regexp.MustCompile(`naming to .*` + regexp.QuoteMeta(projectName) + `-` + regexp.QuoteMeta(serviceName) + `.* done`)\n\treturn len(pattern.FindAllString(output, -1))\n}\n\n// randomProjectName generates a unique project name for parallel test execution\n// Format: prefix-<8 random hex chars> (e.g., \"build-once-3f4a9b2c\")\nfunc randomProjectName(prefix string) string {\n\tb := make([]byte, 4) // 4 bytes = 8 hex chars\n\trand.Read(b)         //nolint:errcheck\n\treturn fmt.Sprintf(\"%s-%s\", prefix, hex.EncodeToString(b))\n}\n"
  },
  {
    "path": "pkg/e2e/compose_run_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestLocalComposeRun(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"run-test\")\n\n\tt.Run(\"compose run\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"back\")\n\t\tlines := Lines(res.Stdout())\n\t\tassert.Equal(t, lines[len(lines)-1], \"Hello there!!\", res.Stdout())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"orphan\"))\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"back\", \"echo\",\n\t\t\t\"Hello one more time\")\n\t\tlines = Lines(res.Stdout())\n\t\tassert.Equal(t, lines[len(lines)-1], \"Hello one more time\", res.Stdout())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"orphan\"))\n\t})\n\n\tt.Run(\"check run container exited\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tlines := Lines(res.Stdout())\n\t\tvar runContainerID string\n\t\tvar truncatedSlug string\n\t\tfor _, line := range lines {\n\t\t\tfields := strings.Fields(line)\n\t\t\tcontainerID := fields[len(fields)-1]\n\t\t\tassert.Assert(t, !strings.HasPrefix(containerID, \"run-test-front\"))\n\t\t\tif strings.HasPrefix(containerID, 
\"run-test-back\") {\n\t\t\t\t// only the one-off container for back service\n\t\t\t\tassert.Assert(t, strings.HasPrefix(containerID, \"run-test-back-run-\"), containerID)\n\t\t\t\ttruncatedSlug = strings.Replace(containerID, \"run-test-back-run-\", \"\", 1)\n\t\t\t\trunContainerID = containerID\n\t\t\t}\n\t\t\tif strings.HasPrefix(containerID, \"run-test-db-1\") {\n\t\t\t\tassert.Assert(t, strings.Contains(line, \"Up\"), line)\n\t\t\t}\n\t\t}\n\t\tassert.Assert(t, runContainerID != \"\")\n\t\tres = c.RunDockerCmd(t, \"inspect\", runContainerID)\n\t\tres.Assert(t, icmd.Expected{Out: ` \"Status\": \"exited\"`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.project\": \"run-test\"`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.oneoff\": \"True\",`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.slug\": \"` + truncatedSlug})\n\t})\n\n\tt.Run(\"compose run --rm\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"--rm\", \"back\", \"echo\",\n\t\t\t\"Hello again\")\n\t\tlines := Lines(res.Stdout())\n\t\tassert.Equal(t, lines[len(lines)-1], \"Hello again\", res.Stdout())\n\n\t\tres = c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"run-test-back\"), res.Stdout())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"down\", \"--remove-orphans\")\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"run-test\"), res.Stdout())\n\t})\n\n\tt.Run(\"compose run --volumes\", func(t *testing.T) {\n\t\twd, err := os.Getwd()\n\t\tassert.NilError(t, err)\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"--volumes\", wd+\":/foo\",\n\t\t\t\"back\", \"/bin/sh\", \"-c\", \"ls /foo\")\n\t\tres.Assert(t, icmd.Expected{Out: \"compose_run_test.go\"})\n\n\t\tres = 
c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"run-test-back\"), res.Stdout())\n\t})\n\n\tt.Run(\"compose run --publish\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/ports.yaml\", \"run\", \"--publish\", \"8081:80\", \"-d\", \"back\",\n\t\t\t\"/bin/sh\", \"-c\", \"sleep 1\")\n\t\tres := c.RunDockerCmd(t, \"ps\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"8081->80/tcp\"), res.Stdout())\n\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"8082->80/tcp\"), res.Stdout())\n\t})\n\n\tt.Run(\"compose run --service-ports\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/ports.yaml\", \"run\", \"--service-ports\", \"-d\", \"back\",\n\t\t\t\"/bin/sh\", \"-c\", \"sleep 1\")\n\t\tres := c.RunDockerCmd(t, \"ps\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"8082->80/tcp\"), res.Stdout())\n\t})\n\n\tt.Run(\"compose run orphan\", func(t *testing.T) {\n\t\t// Use different compose files to get an orphan container\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/orphan.yaml\", \"run\", \"simple\")\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"back\", \"echo\", \"Hello\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"orphan\"))\n\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"back\", \"echo\", \"Hello\")\n\t\tres = icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\t\tcmd.Env = append(cmd.Env, \"COMPOSE_IGNORE_ORPHANS=True\")\n\t\t})\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"orphan\"))\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"down\")\n\t\ticmd.RunCmd(cmd, func(c *icmd.Cmd) {\n\t\t\tc.Env = append(c.Env, \"COMPOSE_REMOVE_ORPHANS=True\")\n\t\t})\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\n\t\tassert.Assert(t, 
!strings.Contains(res.Stdout(), \"run-test\"), res.Stdout())\n\t})\n\n\tt.Run(\"run starts only container and dependencies\", func(t *testing.T) {\n\t\t// ensure that even if another service is up run does not start it: https://github.com/docker/compose/issues/9459\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/deps.yaml\", \"up\", \"service_b\", \"--menu=false\")\n\t\tres.Assert(t, icmd.Success)\n\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/deps.yaml\", \"run\", \"service_a\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"shared_dep\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"service_b\"), res.Combined())\n\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/deps.yaml\", \"down\", \"--remove-orphans\")\n\t})\n\n\tt.Run(\"run without dependencies\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/deps.yaml\", \"run\", \"--no-deps\", \"service_a\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"shared_dep\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"service_b\"), res.Combined())\n\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/deps.yaml\", \"down\", \"--remove-orphans\")\n\t})\n\n\tt.Run(\"run with not required dependency\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/deps-not-required.yaml\", \"run\", \"foo\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"foo\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"bar\"), res.Combined())\n\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/deps-not-required.yaml\", \"down\", \"--remove-orphans\")\n\t})\n\n\tt.Run(\"--quiet-pull\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/quiet-pull.yaml\", \"down\", \"--remove-orphans\", \"--rmi\", \"all\")\n\t\tres.Assert(t, icmd.Success)\n\n\t\tres = 
c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/quiet-pull.yaml\", \"run\", \"--quiet-pull\", \"backend\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"Pull complete\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Pulled\"), res.Combined())\n\t})\n\n\tt.Run(\"COMPOSE_PROGRESS quiet\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/quiet-pull.yaml\", \"down\", \"--remove-orphans\", \"--rmi\", \"all\")\n\t\tres.Assert(t, icmd.Success)\n\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/quiet-pull.yaml\", \"run\", \"backend\")\n\t\tres = icmd.RunCmd(cmd, func(c *icmd.Cmd) {\n\t\t\tc.Env = append(c.Env, \"COMPOSE_PROGRESS=quiet\")\n\t\t})\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"Pull complete\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"Pulled\"), res.Combined())\n\t})\n\n\tt.Run(\"--pull\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/pull.yaml\", \"down\", \"--remove-orphans\", \"--rmi\", \"all\")\n\t\tres.Assert(t, icmd.Success)\n\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/pull.yaml\", \"run\", \"--pull\", \"always\", \"backend\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Image nginx Pulling\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Image nginx Pulled\"), res.Combined())\n\t})\n\n\tt.Run(\"compose run --env-from-file\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"--env-from-file\", \"./fixtures/run-test/run.env\",\n\t\t\t\"front\", \"env\")\n\t\tres.Assert(t, icmd.Expected{Out: \"FOO=BAR\"})\n\t})\n\n\tt.Run(\"compose run --rm with stop signal\", func(t *testing.T) {\n\t\tprojectName := \"run-test\"\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"run\", 
\"--rm\", \"-d\", \"nginx\")\n\t\tres.Assert(t, icmd.Success)\n\n\t\tres = c.RunDockerCmd(t, \"ps\", \"--quiet\", \"--filter\", \"name=run-test-nginx\")\n\t\tcontainerID := strings.TrimSpace(res.Stdout())\n\n\t\tres = c.RunDockerCmd(t, \"stop\", containerID)\n\t\tres.Assert(t, icmd.Success)\n\t\tres = c.RunDockerCmd(t, \"ps\", \"--all\", \"--filter\", \"name=run-test-nginx\", \"--format\", \"'{{.Names}}'\")\n\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"run-test-nginx\"), res.Stdout())\n\t})\n\n\tt.Run(\"compose run --env\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"--env\", \"FOO=BAR\",\n\t\t\t\"front\", \"env\")\n\t\tres.Assert(t, icmd.Expected{Out: \"FOO=BAR\"})\n\t})\n\n\tt.Run(\"compose run --build\", func(t *testing.T) {\n\t\tc.cleanupWithDown(t, \"run-test\", \"--rmi=local\")\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/run-test/compose.yaml\", \"run\", \"build\", \"echo\", \"hello world\")\n\t\tres.Assert(t, icmd.Expected{Out: \"hello world\"})\n\t})\n\n\tt.Run(\"compose run with piped input detection\", func(t *testing.T) {\n\t\tif composeStandaloneMode {\n\t\t\tt.Skip(\"Skipping test compose with piped input detection in standalone mode\")\n\t\t}\n\t\t// Test that piped input is properly detected and TTY is automatically disabled\n\t\t// This tests the logic added in run.go that checks dockerCli.In().IsTerminal()\n\t\tcmd := c.NewCmd(\"sh\", \"-c\", \"echo 'piped-content' | docker compose -f ./fixtures/run-test/piped-test.yaml run --rm piped-test\")\n\t\tres := icmd.RunCmd(cmd)\n\n\t\tres.Assert(t, icmd.Expected{Out: \"piped-content\"})\n\t\tres.Assert(t, icmd.Success)\n\t})\n\n\tt.Run(\"compose run piped input should not allocate TTY\", func(t *testing.T) {\n\t\tif composeStandaloneMode {\n\t\t\tt.Skip(\"Skipping test compose with piped input detection in standalone mode\")\n\t\t}\n\t\t// Test that when stdin is piped, the container correctly detects no 
TTY\n\t\t// This verifies that the automatic noTty=true setting works correctly\n\t\tcmd := c.NewCmd(\"sh\", \"-c\", \"echo '' | docker compose -f ./fixtures/run-test/piped-test.yaml run --rm tty-test\")\n\t\tres := icmd.RunCmd(cmd)\n\n\t\tres.Assert(t, icmd.Expected{Out: \"No TTY detected\"})\n\t\tres.Assert(t, icmd.Success)\n\t})\n\n\tt.Run(\"compose run piped input with explicit --tty should fail\", func(t *testing.T) {\n\t\tif composeStandaloneMode {\n\t\t\tt.Skip(\"Skipping test compose with piped input detection in standalone mode\")\n\t\t}\n\t\t// Test that explicitly requesting TTY with piped input fails with proper error message\n\t\t// This should trigger the \"input device is not a TTY\" error\n\t\tcmd := c.NewCmd(\"sh\", \"-c\", \"echo 'test' | docker compose -f ./fixtures/run-test/piped-test.yaml run --rm --tty piped-test\")\n\t\tres := icmd.RunCmd(cmd)\n\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"the input device is not a TTY\",\n\t\t})\n\t})\n\n\tt.Run(\"compose run piped input with --no-TTY=false should fail\", func(t *testing.T) {\n\t\tif composeStandaloneMode {\n\t\t\tt.Skip(\"Skipping test compose with piped input detection in standalone mode\")\n\t\t}\n\t\t// Test that explicitly disabling --no-TTY (i.e., requesting TTY) with piped input fails\n\t\t// This should also trigger the \"input device is not a TTY\" error\n\t\tcmd := c.NewCmd(\"sh\", \"-c\", \"echo 'test' | docker compose -f ./fixtures/run-test/piped-test.yaml run --rm --no-TTY=false piped-test\")\n\t\tres := icmd.RunCmd(cmd)\n\n\t\tres.Assert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"the input device is not a TTY\",\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/compose_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\ttestify \"github.com/stretchr/testify/assert\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestLocalComposeUp(t *testing.T) {\n\t// this test shares a fixture with TestCompatibility and can't run at the same time\n\tc := NewCLI(t)\n\n\tconst projectName = \"compose-e2e-demo\"\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/sentences/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"check accessing running app\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tres.Assert(t, icmd.Expected{Out: `web`})\n\n\t\tendpoint := \"http://localhost:90\"\n\t\toutput := HTTPGetWithRetry(t, endpoint+\"/words/noun\", http.StatusOK, 2*time.Second, 20*time.Second)\n\t\tassert.Assert(t, strings.Contains(output, `\"word\":`))\n\n\t\tres = c.RunDockerCmd(t, \"network\", \"ls\")\n\t\tres.Assert(t, icmd.Expected{Out: projectName + \"_default\"})\n\t})\n\n\tt.Run(\"top\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"top\")\n\t\toutput := res.Stdout()\n\t\thead := []string{\"UID\", \"PID\", \"PPID\", \"C\", \"STIME\", \"TTY\", \"TIME\", \"CMD\"}\n\t\tfor _, 
h := range head {\n\t\t\tassert.Assert(t, strings.Contains(output, h), output)\n\t\t}\n\t\tassert.Assert(t, strings.Contains(output, `java -Xmx8m -Xms8m -jar /app/words.jar`), output)\n\t\tassert.Assert(t, strings.Contains(output, `/dispatcher`), output)\n\t})\n\n\tt.Run(\"check compose labels\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"inspect\", projectName+\"-web-1\")\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.container-number\": \"1\"`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.project\": \"compose-e2e-demo\"`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.oneoff\": \"False\",`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.config-hash\":`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.project.config_files\":`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.project.working_dir\":`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.service\": \"web\"`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.version\":`})\n\n\t\tres = c.RunDockerCmd(t, \"network\", \"inspect\", projectName+\"_default\")\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.network\": \"default\"`})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.project\": `})\n\t\tres.Assert(t, icmd.Expected{Out: `\"com.docker.compose.version\": `})\n\t})\n\n\tt.Run(\"check user labels\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"inspect\", projectName+\"-web-1\")\n\t\tres.Assert(t, icmd.Expected{Out: `\"my-label\": \"test\"`})\n\t})\n\n\tt.Run(\"check healthcheck output\", func(t *testing.T) {\n\t\tc.WaitForCmdResult(t, c.NewDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--format\", \"json\"),\n\t\t\tIsHealthy(projectName+\"-web-1\"),\n\t\t\t5*time.Second, 1*time.Second)\n\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tassertServiceStatus(t, projectName, \"web\", \"(healthy)\", 
res.Stdout())\n\t})\n\n\tt.Run(\"images\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"images\")\n\t\tres.Assert(t, icmd.Expected{Out: `compose-e2e-demo-db-1      gtardif/sentences-db    latest`})\n\t\tres.Assert(t, icmd.Expected{Out: `compose-e2e-demo-web-1     gtardif/sentences-web   latest`})\n\t\tres.Assert(t, icmd.Expected{Out: `compose-e2e-demo-words-1   gtardif/sentences-api   latest`})\n\t})\n\n\tt.Run(\"down SERVICE\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"web\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"compose-e2e-demo-web-1\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"compose-e2e-demo-db-1\"), res.Combined())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tt.Run(\"check containers after down\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\t})\n\n\tt.Run(\"check networks after down\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"network\", \"ls\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\t})\n}\n\nfunc TestDownComposefileInParentFolder(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\ttmpFolder, err := os.MkdirTemp(\"fixtures/simple-composefile\", \"test-tmp\")\n\tassert.NilError(t, err)\n\tdefer os.Remove(tmpFolder) //nolint:errcheck\n\tprojectName := filepath.Base(tmpFolder)\n\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", tmpFolder, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{Err: \"Started\", ExitCode: 0})\n\n\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"down\")\n\tres.Assert(t, icmd.Expected{Err: \"Removed\", ExitCode: 0})\n}\n\nfunc TestAttachRestart(t 
*testing.T) {\n\tt.Skip(\"Skipping test until we can fix it\")\n\n\tif _, ok := os.LookupEnv(\"CI\"); ok {\n\t\tt.Skip(\"Skipping test on CI... flaky\")\n\t}\n\tc := NewParallelCLI(t)\n\n\tcmd := c.NewDockerComposeCmd(t, \"--ansi=never\", \"--project-directory\", \"./fixtures/attach-restart\", \"up\")\n\tres := icmd.StartCmd(cmd)\n\tdefer c.RunDockerComposeCmd(t, \"-p\", \"attach-restart\", \"down\")\n\n\tc.WaitForCondition(t, func() (bool, string) {\n\t\tdebug := res.Combined()\n\t\treturn strings.Count(res.Stdout(),\n\t\t\t\t\"failing-1 exited with code 1\") == 3, fmt.Sprintf(\"'failing-1 exited with code 1' not found 3 times in:\\n%s\\n\",\n\t\t\t\tdebug)\n\t}, 4*time.Minute, 2*time.Second)\n\n\tassert.Equal(t, strings.Count(res.Stdout(), \"failing-1  | world\"), 3, res.Combined())\n}\n\nfunc TestInitContainer(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tres := c.RunDockerComposeCmd(t, \"--ansi=never\", \"--project-directory\", \"./fixtures/init-container\", \"up\", \"--menu=false\")\n\tdefer c.RunDockerComposeCmd(t, \"-p\", \"init-container\", \"down\")\n\ttestify.Regexp(t, \"foo-1  | hello(?m:.*)bar-1  | world\", res.Stdout())\n}\n\nfunc TestRm(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"compose-e2e-rm\"\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-composefile/compose.yaml\", \"-p\", projectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"rm --stop --force simple\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-composefile/compose.yaml\", \"-p\", projectName, \"rm\",\n\t\t\t\"--stop\", \"--force\", \"simple\")\n\t\tres.Assert(t, icmd.Expected{Err: \"Removed\", ExitCode: 0})\n\t})\n\n\tt.Run(\"check containers after rm\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName+\"-simple\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), 
projectName+\"-another\"), res.Combined())\n\t})\n\n\tt.Run(\"up (again)\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-composefile/compose.yaml\", \"-p\", projectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"rm --stop --force <none>\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-composefile/compose.yaml\", \"-p\", projectName, \"rm\",\n\t\t\t\"--stop\", \"--force\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t})\n\n\tt.Run(\"check containers after rm <none>\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName+\"-simple\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName+\"-another\"), res.Combined())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, \"down\")\n\t})\n}\n\nfunc TestCompatibility(t *testing.T) {\n\t// this test shares a fixture with TestLocalComposeUp and can't run at the same time\n\tc := NewCLI(t)\n\n\tconst projectName = \"compose-e2e-compatibility\"\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"--compatibility\", \"-f\", \"./fixtures/sentences/compose.yaml\", \"--project-name\",\n\t\t\tprojectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"check container names\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\", \"--format\", \"{{.Names}}\")\n\t\tres.Assert(t, icmd.Expected{Out: \"compose-e2e-compatibility_web_1\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"compose-e2e-compatibility_words_1\"})\n\t\tres.Assert(t, icmd.Expected{Out: \"compose-e2e-compatibility_db_1\"})\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, \"down\")\n\t})\n}\n\nfunc TestConfig(t *testing.T) {\n\tconst projectName = \"compose-e2e-config\"\n\tc := NewParallelCLI(t)\n\n\twd, err := os.Getwd()\n\tassert.NilError(t, err)\n\n\tt.Run(\"config\", func(t *testing.T) 
{\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-build-test/compose.yaml\", \"-p\", projectName, \"config\")\n\t\tres.Assert(t, icmd.Expected{Out: fmt.Sprintf(`name: %s\nservices:\n  nginx:\n    build:\n      context: %s\n      dockerfile: Dockerfile\n    networks:\n      default: null\nnetworks:\n  default:\n    name: compose-e2e-config_default\n`, projectName, filepath.Join(wd, \"fixtures\", \"simple-build-test\", \"nginx-build\")), ExitCode: 0})\n\t})\n}\n\nfunc TestConfigInterpolate(t *testing.T) {\n\tconst projectName = \"compose-e2e-config-interpolate\"\n\tc := NewParallelCLI(t)\n\n\twd, err := os.Getwd()\n\tassert.NilError(t, err)\n\n\tt.Run(\"config\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-build-test/compose-interpolate.yaml\", \"-p\", projectName, \"config\", \"--no-interpolate\")\n\t\tres.Assert(t, icmd.Expected{Out: fmt.Sprintf(`name: %s\nnetworks:\n  default:\n    name: compose-e2e-config-interpolate_default\nservices:\n  nginx:\n    build:\n      context: %s\n      dockerfile: ${MYVAR}\n    networks:\n      default: null\n`, projectName, filepath.Join(wd, \"fixtures\", \"simple-build-test\", \"nginx-build\")), ExitCode: 0})\n\t})\n}\n\nfunc TestStopWithDependenciesAttached(t *testing.T) {\n\tconst projectName = \"compose-e2e-stop-with-deps\"\n\tc := NewParallelCLI(t, WithEnv(\"COMMAND=echo hello\"))\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, \"down\", \"--remove-orphans\", \"--timeout=0\")\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/compose.yaml\", \"-p\", projectName, \"up\", \"--attach-dependencies\", \"foo\", \"--menu=false\")\n\tres.Assert(t, icmd.Expected{Out: \"exited with code 0\"})\n}\n\nfunc TestRemoveOrphaned(t *testing.T) {\n\tconst projectName = \"compose-e2e-remove-orphaned\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"-p\", projectName, 
\"down\", \"--remove-orphans\", \"--timeout=0\")\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\t// run stack\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/sentences/compose.yaml\", \"-p\", projectName, \"up\", \"-d\")\n\n\t// down \"web\" service with orphaned removed\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/sentences/compose.yaml\", \"-p\", projectName, \"down\", \"--remove-orphans\", \"web\")\n\n\t// check \"words\" service has not been considered orphaned\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/sentences/compose.yaml\", \"-p\", projectName, \"ps\", \"--format\", \"{{.Name}}\")\n\tres.Assert(t, icmd.Expected{Out: fmt.Sprintf(\"%s-words-1\", projectName)})\n}\n\nfunc TestComposeFileSetByDotEnv(t *testing.T) {\n\tc := NewCLI(t)\n\tdefer c.cleanupWithDown(t, \"dotenv\")\n\n\tcmd := c.NewDockerComposeCmd(t, \"config\")\n\tcmd.Dir = filepath.Join(\".\", \"fixtures\", \"dotenv\")\n\tres := icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{\n\t\tExitCode: 0,\n\t\tOut:      \"image: test:latest\",\n\t})\n\tres.Assert(t, icmd.Expected{\n\t\tOut: \"image: enabled:profile\",\n\t})\n}\n\nfunc TestComposeFileSetByProjectDirectory(t *testing.T) {\n\tc := NewCLI(t)\n\tdefer c.cleanupWithDown(t, \"dotenv\")\n\n\tdir := filepath.Join(\".\", \"fixtures\", \"dotenv\", \"development\")\n\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", dir, \"config\")\n\tres := icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{\n\t\tExitCode: 0,\n\t\tOut:      \"image: backend:latest\",\n\t})\n}\n\nfunc TestComposeFileSetByEnvFile(t *testing.T) {\n\tc := NewCLI(t)\n\tdefer c.cleanupWithDown(t, \"dotenv\")\n\n\tdotEnv, err := os.CreateTemp(t.TempDir(), \".env\")\n\tassert.NilError(t, err)\n\terr = os.WriteFile(dotEnv.Name(), []byte(`\nCOMPOSE_FILE=fixtures/dotenv/development/compose.yaml\nIMAGE_NAME=test\nIMAGE_TAG=latest\nCOMPOSE_PROFILES=test\n`), 0o700)\n\tassert.NilError(t, err)\n\n\tcmd := c.NewDockerComposeCmd(t, \"--env-file\", dotEnv.Name(), \"config\")\n\tres 
:= icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{\n\t\tOut: \"image: test:latest\",\n\t})\n\tres.Assert(t, icmd.Expected{\n\t\tOut: \"image: enabled:profile\",\n\t})\n}\n\nfunc TestNestedDotEnv(t *testing.T) {\n\tc := NewCLI(t)\n\tdefer c.cleanupWithDown(t, \"nested\")\n\n\tcmd := c.NewDockerComposeCmd(t, \"run\", \"echo\")\n\tcmd.Dir = filepath.Join(\".\", \"fixtures\", \"nested\")\n\tres := icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{\n\t\tExitCode: 0,\n\t\tOut:      \"root win=root\",\n\t})\n\n\tcmd = c.NewDockerComposeCmd(t, \"run\", \"echo\")\n\tcmd.Dir = filepath.Join(\".\", \"fixtures\", \"nested\", \"sub\")\n\tdefer c.cleanupWithDown(t, \"nested\")\n\tres = icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{\n\t\tExitCode: 0,\n\t\tOut:      \"root sub win=sub\",\n\t})\n}\n\nfunc TestUnnecessaryResources(t *testing.T) {\n\tconst projectName = \"compose-e2e-unnecessary-resources\"\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, projectName)\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/external/compose.yaml\", \"-p\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{\n\t\tExitCode: 1,\n\t\tErr:      \"network foo_bar declared as external, but could not be found\",\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/external/compose.yaml\", \"-p\", projectName, \"up\", \"-d\", \"test\")\n\t// Should not fail as missing external network is not used\n}\n"
  },
  {
    "path": "pkg/e2e/compose_up_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestUpWait(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-deps-wait\"\n\n\ttimeout := time.After(30 * time.Second)\n\tdone := make(chan bool)\n\tgo func() {\n\t\t//nolint:nolintlint,testifylint // helper asserts inside goroutine; acceptable in this e2e test\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/dependencies/deps-completed-successfully.yaml\", \"--project-name\", projectName, \"up\", \"--wait\", \"-d\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-deps-wait-oneshot-1\"), res.Combined())\n\t\tdone <- true\n\t}()\n\n\tselect {\n\tcase <-timeout:\n\t\tt.Fatal(\"test did not finish in time\")\n\tcase <-done:\n\t\tbreak\n\t}\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n}\n\nfunc TestUpExitCodeFrom(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-exit-code-from\"\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/start-fail/start-depends_on-long-lived.yaml\", \"--project-name\", projectName, \"up\", \"--menu=false\", \"--exit-code-from=failure\", \"failure\")\n\tres.Assert(t, icmd.Expected{ExitCode: 42})\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", 
\"--remove-orphans\")\n}\n\nfunc TestUpExitCodeFromContainerKilled(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-exit-code-from-kill\"\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/start-fail/start-depends_on-long-lived.yaml\", \"--project-name\", projectName, \"up\", \"--menu=false\", \"--exit-code-from=test\")\n\tres.Assert(t, icmd.Expected{ExitCode: 143})\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--remove-orphans\")\n}\n\nfunc TestPortRange(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-port-range\"\n\n\treset := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--remove-orphans\", \"--timeout=0\")\n\t}\n\treset()\n\tt.Cleanup(reset)\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/port-range/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Success)\n}\n\nfunc TestStdoutStderr(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-stdout-stderr\"\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/stdout-stderr/compose.yaml\", \"--project-name\", projectName, \"up\", \"--menu=false\")\n\tres.Assert(t, icmd.Expected{Out: \"log to stdout\", Err: \"log to stderr\"})\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--remove-orphans\")\n}\n\nfunc TestLoggingDriver(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"e2e-logging-driver\"\n\tdefer c.cleanupWithDown(t, projectName)\n\n\thost := \"HOST=127.0.0.1\"\n\tres := c.RunDockerCmd(t, \"info\", \"-f\", \"{{.OperatingSystem}}\")\n\tos := res.Stdout()\n\tif strings.TrimSpace(os) == \"Docker Desktop\" {\n\t\thost = \"HOST=host.docker.internal\"\n\t}\n\n\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"fixtures/logging-driver/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tcmd.Env = append(cmd.Env, host, \"BAR=foo\")\n\ticmd.RunCmd(cmd).Assert(t, 
icmd.Success)\n\n\tcmd = c.NewDockerComposeCmd(t, \"-f\", \"fixtures/logging-driver/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tcmd.Env = append(cmd.Env, host, \"BAR=zot\")\n\ticmd.RunCmd(cmd).Assert(t, icmd.Success)\n}\n"
  },
  {
    "path": "pkg/e2e/config_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestLocalComposeConfig(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"compose-e2e-config\"\n\n\tt.Run(\"yaml\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/config/compose.yaml\", \"--project-name\", projectName, \"config\")\n\t\tres.Assert(t, icmd.Expected{Out: `\n    ports:\n      - mode: ingress\n        target: 80\n        published: \"8080\"\n        protocol: tcp`})\n\t})\n\n\tt.Run(\"json\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/config/compose.yaml\", \"--project-name\", projectName, \"config\", \"--format\", \"json\")\n\t\tres.Assert(t, icmd.Expected{Out: `\"published\": \"8080\"`})\n\t})\n\n\tt.Run(\"--no-interpolate\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/config/compose.yaml\", \"--project-name\", projectName, \"config\", \"--no-interpolate\")\n\t\tres.Assert(t, icmd.Expected{Out: `- ${PORT:-8080}:80`})\n\t})\n\n\tt.Run(\"--variables --format json\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/config/compose.yaml\", \"--project-name\", projectName, \"config\", \"--variables\", \"--format\", \"json\")\n\t\tres.Assert(t, icmd.Expected{Out: `{\n    \"PORT\": {\n        \"Name\": 
\"PORT\",\n        \"DefaultValue\": \"8080\",\n        \"PresenceValue\": \"\",\n        \"Required\": false\n    }\n}`})\n\t})\n\n\tt.Run(\"--variables --format yaml\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/config/compose.yaml\", \"--project-name\", projectName, \"config\", \"--variables\", \"--format\", \"yaml\")\n\t\tres.Assert(t, icmd.Expected{Out: `PORT:\n    name: PORT\n    defaultvalue: \"8080\"\n    presencevalue: \"\"\n    required: false`})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/configs_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestConfigFromEnv(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"configs\")\n\n\tt.Run(\"config from file\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/configs/compose.yaml\", \"run\", \"from_file\"))\n\t\tres.Assert(t, icmd.Expected{Out: \"This is my config file\"})\n\t})\n\n\tt.Run(\"config from env\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/configs/compose.yaml\", \"run\", \"from_env\"),\n\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"CONFIG=config\")\n\t\t\t})\n\t\tres.Assert(t, icmd.Expected{Out: \"config\"})\n\t})\n\n\tt.Run(\"config inlined\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/configs/compose.yaml\", \"run\", \"inlined\"),\n\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"CONFIG=config\")\n\t\t\t})\n\t\tres.Assert(t, icmd.Expected{Out: \"This is my config\"})\n\t})\n\n\tt.Run(\"custom target\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/configs/compose.yaml\", \"run\", \"target\"),\n\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"CONFIG=config\")\n\t\t\t})\n\t\tres.Assert(t, 
icmd.Expected{Out: \"This is my config\"})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/container_name_test.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestUpContainerNameConflict(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-container_name_conflict\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/container_name/compose.yaml\", \"--project-name\", projectName, \"up\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: `container name \"test\" is already in use`})\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/container_name/compose.yaml\", \"--project-name\", projectName, \"up\", \"test\")\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/container_name/compose.yaml\", \"--project-name\", projectName, \"up\", \"another_test\")\n}\n"
  },
  {
    "path": "pkg/e2e/cp_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestCopy(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"copy_e2e\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"--project-name\", projectName, \"down\")\n\n\t\tos.Remove(\"./fixtures/cp-test/from-default.txt\") //nolint:errcheck\n\t\tos.Remove(\"./fixtures/cp-test/from-indexed.txt\") //nolint:errcheck\n\t\tos.RemoveAll(\"./fixtures/cp-test/cp-folder2\")    //nolint:errcheck\n\t})\n\n\tt.Run(\"start service\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"--project-name\", projectName, \"up\",\n\t\t\t\"--scale\", \"nginx=5\", \"-d\")\n\t})\n\n\tt.Run(\"make sure service is running\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tassertServiceStatus(t, projectName, \"nginx\", \"Up\", res.Stdout())\n\t})\n\n\tt.Run(\"copy to container copies the file to the all containers by default\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"-p\", projectName, \"cp\",\n\t\t\t\"./fixtures/cp-test/cp-me.txt\", \"nginx:/tmp/default.txt\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 
0})\n\n\t\toutput := c.RunDockerCmd(t, \"exec\", projectName+\"-nginx-1\", \"cat\", \"/tmp/default.txt\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `hello world`), output)\n\n\t\toutput = c.RunDockerCmd(t, \"exec\", projectName+\"-nginx-2\", \"cat\", \"/tmp/default.txt\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `hello world`), output)\n\n\t\toutput = c.RunDockerCmd(t, \"exec\", projectName+\"-nginx-3\", \"cat\", \"/tmp/default.txt\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `hello world`), output)\n\t})\n\n\tt.Run(\"copy to container with a given index copies the file to the given container\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"-p\", projectName, \"cp\", \"--index=3\",\n\t\t\t\"./fixtures/cp-test/cp-me.txt\", \"nginx:/tmp/indexed.txt\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t\toutput := c.RunDockerCmd(t, \"exec\", projectName+\"-nginx-3\", \"cat\", \"/tmp/indexed.txt\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `hello world`), output)\n\n\t\tres = c.RunDockerOrExitError(t, \"exec\", projectName+\"-nginx-2\", \"cat\", \"/tmp/indexed.txt\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1})\n\t})\n\n\tt.Run(\"copy from a container copies the file to the host from the first container by default\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"-p\", projectName, \"cp\",\n\t\t\t\"nginx:/tmp/default.txt\", \"./fixtures/cp-test/from-default.txt\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t\tdata, err := os.ReadFile(\"./fixtures/cp-test/from-default.txt\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, `hello world`, string(data))\n\t})\n\n\tt.Run(\"copy from a container with a given index copies the file to host\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"-p\", projectName, \"cp\", 
\"--index=3\",\n\t\t\t\"nginx:/tmp/indexed.txt\", \"./fixtures/cp-test/from-indexed.txt\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t\tdata, err := os.ReadFile(\"./fixtures/cp-test/from-indexed.txt\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, `hello world`, string(data))\n\t})\n\n\tt.Run(\"copy to and from a container also work with folder\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"-p\", projectName, \"cp\",\n\t\t\t\"./fixtures/cp-test/cp-folder\", \"nginx:/tmp\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t\toutput := c.RunDockerCmd(t, \"exec\", projectName+\"-nginx-1\", \"cat\", \"/tmp/cp-folder/cp-me.txt\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `hello world from folder`), output)\n\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/cp-test/compose.yaml\", \"-p\", projectName, \"cp\",\n\t\t\t\"nginx:/tmp/cp-folder\", \"./fixtures/cp-test/cp-folder2\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t\tdata, err := os.ReadFile(\"./fixtures/cp-test/cp-folder2/cp-me.txt\")\n\t\tassert.NilError(t, err)\n\t\tassert.Equal(t, `hello world from folder`, string(data))\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/e2e_config_plugin.go",
    "content": "//go:build !standalone\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nconst composeStandaloneMode = false\n"
  },
  {
    "path": "pkg/e2e/e2e_config_standalone.go",
    "content": "//go:build standalone\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nconst composeStandaloneMode = true\n"
  },
  {
    "path": "pkg/e2e/env_file_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestRawEnvFile(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"dotenv\")\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dotenv/raw.yaml\", \"run\", \"test\")\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), \"'{\\\"key\\\": \\\"value\\\"}'\")\n}\n\nfunc TestUnusedMissingEnvFile(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"unused_dotenv\")\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/env_file/compose.yaml\", \"up\", \"-d\", \"serviceA\")\n\n\t// Runtime operations should work even with missing env file\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/env_file/compose.yaml\", \"ps\")\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/env_file/compose.yaml\", \"logs\")\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/env_file/compose.yaml\", \"down\")\n}\n\nfunc TestRunEnvFile(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"run_dotenv\")\n\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"./fixtures/env_file\", \"run\", \"serviceC\", \"env\")\n\tres.Assert(t, icmd.Expected{Out: \"FOO=BAR\"})\n}\n"
  },
  {
    "path": "pkg/e2e/exec_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestExec(t *testing.T) {\n\tconst projectName = \"e2e-exec\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/exec/compose.yaml\", \"--project-name\", projectName, \"run\", \"-d\", \"test\", \"cat\")\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-name\", projectName, \"exec\", \"--index=1\", \"test\", \"ps\")\n\tres.Assert(t, icmd.Expected{Err: \"service \\\"test\\\" is not running container #1\", ExitCode: 1})\n\n\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"exec\", \"test\", \"ps\")\n\tres.Assert(t, icmd.Expected{Out: \"cat\"}) // one-off container was selected\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/exec/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"exec\", \"test\", \"ps\")\n\tres.Assert(t, icmd.Expected{Out: \"tail\"}) // service container was selected\n}\n"
  },
  {
    "path": "pkg/e2e/export_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n)\n\nfunc TestExport(t *testing.T) {\n\tconst projectName = \"e2e-export-service\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/export/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"service\")\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"export\", \"-o\", \"service.tar\", \"service\")\n}\n\nfunc TestExportWithReplicas(t *testing.T) {\n\tconst projectName = \"e2e-export-service-with-replicas\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/export/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"service-with-replicas\")\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"export\", \"-o\", \"r1.tar\", \"--index=1\", \"service-with-replicas\")\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"export\", \"-o\", \"r2.tar\", \"--index=2\", \"service-with-replicas\")\n}\n"
  },
  {
    "path": "pkg/e2e/expose_test.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\n// see https://github.com/docker/compose/issues/13378\nfunc TestExposeRange(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tf := filepath.Join(t.TempDir(), \"compose.yaml\")\n\terr := os.WriteFile(f, []byte(`\nname: test-expose-range\nservices:\n  test:\n    image: alpine\n    expose:\n      - \"9091-9092\"\n`), 0o644)\n\tassert.NilError(t, err)\n\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, \"test-expose-range\")\n\t})\n\tc.RunDockerComposeCmd(t, \"-f\", f, \"up\")\n}\n"
  },
  {
    "path": "pkg/e2e/fixtures/attach-restart/compose.yaml",
    "content": "services:\n  failing:\n    image: alpine\n    command: sh -c \"sleep 0.1 && echo world && /bin/false\"\n    deploy:\n      restart_policy:\n        condition: \"on-failure\"\n        max_attempts: 2\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nENV ENV_FROM_DOCKERFILE=1\nEXPOSE 8081\nCMD [\"echo\", \"Hello from Dockerfile\"]\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/compose.yaml",
    "content": "services:\n  serviceA:\n    image: alpine\n    build: .\n    ports:\n      - 80:8080\n    networks:\n      - private-network\n    configs:\n      - source: my-config\n        target: /etc/my-config1.txt\n  serviceB:\n    image: alpine\n    build: .\n    ports:\n      - 8081:8082\n    secrets:\n      - my-secrets\n    networks:\n      - private-network\n      - public-network\nconfigs:\n  my-config:\n    file: my-config.txt\nsecrets:\n  my-secrets:\n    file: not-so-secret.txt\nnetworks:\n  private-network:\n    internal: true\n  public-network: {}\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/Chart.yaml",
    "content": "#! Chart.yaml\napiVersion: v2\nname: bridge\nversion: 0.0.1\n# kubeVersion: >= 1.29.1\ndescription: A generated Helm Chart for bridge generated via compose-bridge.\ntype: application\nkeywords:\n    - bridge\nappVersion: 'v0.0.1'\nsources:\nannotations:\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/0-bridge-namespace.yaml",
    "content": "#! 0-bridge-namespace.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Namespace\nmetadata:\n    name: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/bridge-configs.yaml",
    "content": "#! bridge-configs.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: ConfigMap\nmetadata:\n    name: {{ .Values.projectName }}\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\ndata:\n    my-config: |\n        My config file\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/my-secrets-secret.yaml",
    "content": "#! my-secrets-secret.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Secret\nmetadata:\n    name: my-secrets\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.secret: my-secrets\ndata:\n    my-secrets: bm90LXNlY3JldA==\ntype: Opaque\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/private-network-network-policy.yaml",
    "content": "#! private-network-network-policy.yaml\n# Generated code, do not edit\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n    name: private-network-network-policy\n    namespace: {{ .Values.namespace }}\nspec:\n    podSelector:\n        matchLabels:\n            com.docker.compose.network.private-network: \"true\"\n    policyTypes:\n        - Ingress\n        - Egress\n    ingress:\n        - from:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.private-network: \"true\"\n    egress:\n        - to:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.private-network: \"true\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/public-network-network-policy.yaml",
    "content": "#! public-network-network-policy.yaml\n# Generated code, do not edit\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n    name: public-network-network-policy\n    namespace: {{ .Values.namespace }}\nspec:\n    podSelector:\n        matchLabels:\n            com.docker.compose.network.public-network: \"true\"\n    policyTypes:\n        - Ingress\n        - Egress\n    ingress:\n        - from:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.public-network: \"true\"\n    egress:\n        - to:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.public-network: \"true\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/serviceA-deployment.yaml",
    "content": "#! serviceA-deployment.yaml\n# Generated code, do not edit\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n    name: servicea\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n        app.kubernetes.io/managed-by: Helm\nspec:\n    replicas: {{ .Values.deployment.defaultReplicas }}\n    selector:\n        matchLabels:\n            com.docker.compose.project: bridge\n            com.docker.compose.service: serviceA\n    strategy:\n        type: {{ .Values.deployment.strategy }}\n    template:\n        metadata:\n            labels:\n                com.docker.compose.project: bridge\n                com.docker.compose.service: serviceA\n                com.docker.compose.network.private-network: \"true\"\n        spec:\n            containers:\n                - name: servicea\n                  image: {{ .Values.serviceA.image }}\n                  imagePullPolicy: {{ .Values.serviceA.imagePullPolicy }}\n                  resources:\n                    limits:\n                        cpu: {{ .Values.resources.defaultCpuLimit }}\n                        memory: {{ .Values.resources.defaultMemoryLimit }}\n                  ports:\n                    - name: servicea-8080\n                      containerPort: 8080\n                  volumeMounts:\n                    - name: etc-my-config1-txt\n                      mountPath: /etc/my-config1.txt\n                      subPath: my-config\n                      readOnly: true\n            volumes:\n                - name: etc-my-config1-txt\n                  configMap:\n                    name: {{ .Values.projectName }}\n                    items:\n                        - key: my-config\n                          path: my-config\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/serviceA-expose.yaml",
    "content": "#! serviceA-expose.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: servicea\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n        app.kubernetes.io/managed-by: Helm\nspec:\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n    ports:\n        - name: servicea-8080\n          port: 8080\n          targetPort: servicea-8080\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/serviceA-service.yaml",
    "content": "# check if there is at least one published port\n\n#! serviceA-service.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: servicea-published\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n        app.kubernetes.io/managed-by: Helm\nspec:\n    type: {{ .Values.service.type }}\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n    ports:\n        - name: servicea-80\n          port: 80\n          protocol: TCP\n          targetPort: servicea-8080\n\n# check if there is at least one published port\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/serviceB-deployment.yaml",
    "content": "#! serviceB-deployment.yaml\n# Generated code, do not edit\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n    name: serviceb\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n        app.kubernetes.io/managed-by: Helm\nspec:\n    replicas: {{ .Values.deployment.defaultReplicas }}\n    selector:\n        matchLabels:\n            com.docker.compose.project: bridge\n            com.docker.compose.service: serviceB\n    strategy:\n        type: {{ .Values.deployment.strategy }}\n    template:\n        metadata:\n            labels:\n                com.docker.compose.project: bridge\n                com.docker.compose.service: serviceB\n                com.docker.compose.network.private-network: \"true\"\n                com.docker.compose.network.public-network: \"true\"\n        spec:\n            containers:\n                - name: serviceb\n                  image: {{ .Values.serviceB.image }}\n                  imagePullPolicy: {{ .Values.serviceB.imagePullPolicy }}\n                  resources:\n                    limits:\n                        cpu: {{ .Values.resources.defaultCpuLimit }}\n                        memory: {{ .Values.resources.defaultMemoryLimit }}\n                  ports:\n                    - name: serviceb-8082\n                      containerPort: 8082\n                  volumeMounts:\n                    - name: run-secrets-my-secrets\n                      mountPath: /run/secrets/my-secrets\n                      subPath: my-secrets\n                      readOnly: true\n            volumes:\n                - name: run-secrets-my-secrets\n                  secret:\n                    secretName: my-secrets\n                    items:\n                        - key: my-secrets\n                          path: my-secrets\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/serviceB-expose.yaml",
    "content": "#! serviceB-expose.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: serviceb\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n        app.kubernetes.io/managed-by: Helm\nspec:\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n    ports:\n        - name: serviceb-8082\n          port: 8082\n          targetPort: serviceb-8082\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/templates/serviceB-service.yaml",
    "content": "#! serviceB-service.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: serviceb-published\n    namespace: {{ .Values.namespace }}\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n        app.kubernetes.io/managed-by: Helm\nspec:\n    type: {{ .Values.service.type }}\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n    ports:\n        - name: serviceb-8081\n          port: 8081\n          protocol: TCP\n          targetPort: serviceb-8082\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-helm/values.yaml",
    "content": "#! values.yaml\n# Project Name\nprojectName: bridge\n# Namespace\nnamespace: bridge\n# Default deployment settings\ndeployment:\n    strategy: Recreate\n    defaultReplicas: 1\n# Default resource limits\nresources:\n    defaultCpuLimit: \"100m\"\n    defaultMemoryLimit: \"512Mi\"\n# Service settings\nservice:\n    type: LoadBalancer\n# Storage settings\nstorage:\n    defaultStorageClass: \"hostpath\"\n    defaultSize: \"100Mi\"\n    defaultAccessMode: \"ReadWriteOnce\"\n# Services variables\nserviceA:\n    image: alpine\n    imagePullPolicy: IfNotPresent\nserviceB:\n    image: alpine\n    imagePullPolicy: IfNotPresent\n\n# You can apply the same logic to loop on networks, volumes, secrets and configs...\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/0-bridge-namespace.yaml",
    "content": "#! 0-bridge-namespace.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Namespace\nmetadata:\n    name: bridge\n    labels:\n        com.docker.compose.project: bridge\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/bridge-configs.yaml",
    "content": "#! bridge-configs.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: ConfigMap\nmetadata:\n    name: bridge\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\ndata:\n    my-config: |\n        My config file\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/kustomization.yaml",
    "content": "#! kustomization.yaml\n# Generated code, do not edit\napiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\nresources:\n    - 0-bridge-namespace.yaml\n    - bridge-configs.yaml\n    - my-secrets-secret.yaml\n    - private-network-network-policy.yaml\n    - public-network-network-policy.yaml\n    - serviceA-deployment.yaml\n    - serviceA-expose.yaml\n    - serviceA-service.yaml\n    - serviceB-deployment.yaml\n    - serviceB-expose.yaml\n    - serviceB-service.yaml\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/my-secrets-secret.yaml",
    "content": "#! my-secrets-secret.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Secret\nmetadata:\n    name: my-secrets\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.secret: my-secrets\ndata:\n    my-secrets: bm90LXNlY3JldA==\ntype: Opaque\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/private-network-network-policy.yaml",
    "content": "#! private-network-network-policy.yaml\n# Generated code, do not edit\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n    name: private-network-network-policy\n    namespace: bridge\nspec:\n    podSelector:\n        matchLabels:\n            com.docker.compose.network.private-network: \"true\"\n    policyTypes:\n        - Ingress\n        - Egress\n    ingress:\n        - from:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.private-network: \"true\"\n    egress:\n        - to:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.private-network: \"true\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/public-network-network-policy.yaml",
    "content": "#! public-network-network-policy.yaml\n# Generated code, do not edit\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n    name: public-network-network-policy\n    namespace: bridge\nspec:\n    podSelector:\n        matchLabels:\n            com.docker.compose.network.public-network: \"true\"\n    policyTypes:\n        - Ingress\n        - Egress\n    ingress:\n        - from:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.public-network: \"true\"\n    egress:\n        - to:\n            - podSelector:\n                matchLabels:\n                    com.docker.compose.network.public-network: \"true\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/serviceA-deployment.yaml",
    "content": "#! serviceA-deployment.yaml\n# Generated code, do not edit\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n    name: servicea\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\nspec:\n    replicas: 1\n    selector:\n        matchLabels:\n            com.docker.compose.project: bridge\n            com.docker.compose.service: serviceA\n    strategy:\n        type: Recreate\n    template:\n        metadata:\n            labels:\n                com.docker.compose.project: bridge\n                com.docker.compose.service: serviceA\n                com.docker.compose.network.private-network: \"true\"\n        spec:\n            containers:\n                - name: servicea\n                  image: alpine\n                  imagePullPolicy: IfNotPresent\n                  ports:\n                    - name: servicea-8080\n                      containerPort: 8080\n                  volumeMounts:\n                    - name: etc-my-config1-txt\n                      mountPath: /etc/my-config1.txt\n                      subPath: my-config\n                      readOnly: true\n            volumes:\n                - name: etc-my-config1-txt\n                  configMap:\n                    name: bridge\n                    items:\n                        - key: my-config\n                          path: my-config\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/serviceA-expose.yaml",
    "content": "#! serviceA-expose.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: servicea\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\nspec:\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n    ports:\n        - name: servicea-8080\n          port: 8080\n          targetPort: servicea-8080\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/serviceA-service.yaml",
    "content": "# check if there is at least one published port\n\n#! serviceA-service.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: servicea-published\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\nspec:\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceA\n    ports:\n        - name: servicea-80\n          port: 80\n          protocol: TCP\n          targetPort: servicea-8080\n\n# check if there is at least one published port\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/serviceB-deployment.yaml",
    "content": "#! serviceB-deployment.yaml\n# Generated code, do not edit\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n    name: serviceb\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\nspec:\n    replicas: 1\n    selector:\n        matchLabels:\n            com.docker.compose.project: bridge\n            com.docker.compose.service: serviceB\n    strategy:\n        type: Recreate\n    template:\n        metadata:\n            labels:\n                com.docker.compose.project: bridge\n                com.docker.compose.service: serviceB\n                com.docker.compose.network.private-network: \"true\"\n                com.docker.compose.network.public-network: \"true\"\n        spec:\n            containers:\n                - name: serviceb\n                  image: alpine\n                  imagePullPolicy: IfNotPresent\n                  ports:\n                    - name: serviceb-8082\n                      containerPort: 8082\n                  volumeMounts:\n                    - name: run-secrets-my-secrets\n                      mountPath: /run/secrets/my-secrets\n                      subPath: my-secrets\n                      readOnly: true\n            volumes:\n                - name: run-secrets-my-secrets\n                  secret:\n                    secretName: my-secrets\n                    items:\n                        - key: my-secrets\n                          path: my-secrets\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/serviceB-expose.yaml",
    "content": "#! serviceB-expose.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: serviceb\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\nspec:\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n    ports:\n        - name: serviceb-8082\n          port: 8082\n          targetPort: serviceb-8082\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/base/serviceB-service.yaml",
    "content": "#! serviceB-service.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: serviceb-published\n    namespace: bridge\n    labels:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\nspec:\n    selector:\n        com.docker.compose.project: bridge\n        com.docker.compose.service: serviceB\n    ports:\n        - name: serviceb-8081\n          port: 8081\n          protocol: TCP\n          targetPort: serviceb-8082\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/overlays/desktop/kustomization.yaml",
    "content": "#! kustomization.yaml\n# Generated code, do not edit\napiVersion: kustomize.config.k8s.io/v1beta1\nkind: Kustomization\nresources:\n    - ../../base\npatches:\n    - path: serviceA-service.yaml\n    - path: serviceB-service.yaml\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/overlays/desktop/serviceA-service.yaml",
    "content": "# check if there is at least one published port\n\n#! serviceA-service.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: servicea-published\n    namespace: bridge\nspec:\n    type: LoadBalancer\n\n# check if there is at least one published port\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/expected-kubernetes/overlays/desktop/serviceB-service.yaml",
    "content": "#! serviceB-service.yaml\n# Generated code, do not edit\napiVersion: v1\nkind: Service\nmetadata:\n    name: serviceb-published\n    namespace: bridge\nspec:\n    type: LoadBalancer\n"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/my-config.txt",
    "content": "My config file"
  },
  {
    "path": "pkg/e2e/fixtures/bridge/not-so-secret.txt",
    "content": "not-secret"
  },
  {
    "path": "pkg/e2e/fixtures/build-dependencies/base.dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\n\nCOPY hello.txt /hello.txt\n\nCMD [ \"/bin/true\" ]\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-dependencies/classic.yaml",
    "content": "services:\n  base:\n    image: base\n    init: true\n    build:\n      context: .\n      dockerfile: base.dockerfile\n  service:\n    init: true\n    depends_on:\n      - base\n    build:\n      context: .\n      dockerfile: service.dockerfile\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-dependencies/compose-depends_on.yaml",
    "content": "services:\n  test1:\n    pull_policy: build\n    build:\n      dockerfile_inline: FROM alpine\n    command:\n      - echo\n      - \"test 1 success\"\n  test2:\n    image: alpine\n    depends_on:\n      - test1\n    command:\n      - echo\n      - \"test 2 success\""
  },
  {
    "path": "pkg/e2e/fixtures/build-dependencies/compose.yaml",
    "content": "services:\n  base:\n    init: true\n    build:\n      context: .\n      dockerfile: base.dockerfile\n  service:\n    init: true\n    build:\n      context: .\n      additional_contexts:\n        base: \"service:base\"\n      dockerfile: service.dockerfile\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-dependencies/hello.txt",
    "content": "this file was copied from base -> service\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-dependencies/service.dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM base\n\nCMD [ \"cat\", \"/hello.txt\" ]\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-infinite/compose.yaml",
    "content": "services:\n    service1:\n        build: service1"
  },
  {
    "path": "pkg/e2e/fixtures/build-infinite/service1/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM busybox\n\nRUN sleep infinity"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/compose.yaml",
    "content": "services:\n  nginx:\n    build: nginx-build\n    ports:\n      - 8070:80\n\n  nginx2:\n    build: nginx-build2\n    image: custom-nginx\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/dependencies/compose.yaml",
    "content": "services:\n  firstbuild:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n      additional_contexts:\n        dep1: service:dep1\n    entrypoint: [\"echo\", \"Hello from firstbuild\"]\n    depends_on:\n      - dep1\n\n  secondbuild:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n      additional_contexts:\n        dep1: service:dep1\n    entrypoint: [\"echo\", \"Hello from secondbuild\"]\n    depends_on:\n      - dep1\n\n  dep1:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n    entrypoint: [\"echo\", \"Hello from dep1\"]"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/entitlements/Dockerfile",
    "content": "# syntax = docker/dockerfile:experimental\n\n\n#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nRUN --security=insecure cat /proc/self/status | grep CapEff\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/entitlements/compose.yaml",
    "content": "services:\n  privileged-service:\n    build:\n      context: .\n      entitlements:\n        - security.insecure\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/escaped/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nARG foo\nRUN echo foo is $foo\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/escaped/compose.yaml",
    "content": "services:\n  foo:\n    build:\n      context: .\n      args:\n        foo: $${bar}\n\n  echo:\n    build:\n      dockerfile_inline: |\n        FROM bash\n        RUN <<'EOF'\n        echo $(seq 10)\n        EOF\n\n  arg:\n    build:\n      args:\n        BOOL: \"true\"\n      dockerfile_inline: |\n        FROM alpine:latest\n        ARG BOOL\n        RUN /bin/$${BOOL}\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/long-output-line/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\nFROM alpine\n# We generate warnings *on purpose* to bloat the JSON output of bake\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT\nARG AWS_SECRET_ACCESS_KEY=FAKE_TO_GENERATE_WARNING_OUTPUT"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/long-output-line/compose.yaml",
    "content": "services:\n    long-line:\n      build:\n        context: .\n        dockerfile: Dockerfile\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/minimal/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM scratch\nCOPY . ."
  },
  {
    "path": "pkg/e2e/fixtures/build-test/minimal/compose.yaml",
    "content": "services:\n  test:\n    build: ."
  },
  {
    "path": "pkg/e2e/fixtures/build-test/multi-args/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nARG IMAGE=666\nARG TAG=666\n\nFROM ${IMAGE}:${TAG}\nRUN echo \"SUCCESS\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/multi-args/compose.yaml",
    "content": "services:\n  multiargs:\n    build:\n      context: .\n      args:\n        IMAGE: alpine\n        TAG: latest\n      labels:\n        - RESULT=SUCCESS\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/nginx-build/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx:alpine\n\nARG FOO\nLABEL FOO=$FOO\nCOPY static /usr/share/nginx/html\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/nginx-build/static/index.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n  <title>Static file 2</title>\n</head>\n<body>\n  <h2>Hello from Nginx container</h2>\n</body>\n</html>\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/nginx-build2/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx:alpine\n\nCOPY static2 /usr/share/nginx/html\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/nginx-build2/static2/index.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n  <title>Static file 2</title>\n</head>\n<body>\n  <h2>Hello from Nginx container</h2>\n</body>\n</html>\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM --platform=$BUILDPLATFORM golang:alpine AS build\n\nARG TARGETPLATFORM\nARG BUILDPLATFORM\nRUN echo \"I am building for $TARGETPLATFORM, running on $BUILDPLATFORM\" > /log\n\nFROM alpine\nCOPY --from=build /log /log\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/compose-multiple-platform-builds.yaml",
    "content": "services:\n  serviceA:\n    image: build-test-platform-a:test\n    build:\n      context: ./contextServiceA\n      platforms:\n        - linux/amd64\n        - linux/arm64\n  serviceB:\n    image: build-test-platform-b:test\n    build:\n      context: ./contextServiceB\n      platforms:\n        - linux/amd64\n        - linux/arm64\n  serviceC:\n    image: build-test-platform-c:test\n    build:\n      context: ./contextServiceC\n      platforms:\n        - linux/amd64\n        - linux/arm64\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/compose-service-platform-and-no-build-platforms.yaml",
    "content": "services:\n  platforms:\n    image: build-test-platform:test\n    platform: linux/386\n    build:\n      context: .\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/compose-service-platform-not-in-build-platforms.yaml",
    "content": "services:\n  platforms:\n    image: build-test-platform:test\n    platform: linux/riscv64\n    build:\n      context: .\n      platforms:\n        - linux/amd64\n        - linux/arm64\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/compose-unsupported-platform.yml",
    "content": "services:\n  platforms:\n    image: build-test-platform:test\n    build:\n      context: .\n      platforms:\n        - unsupported/unsupported\n        - linux/amd64\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/compose.yaml",
    "content": "services:\n  platforms:\n    image: build-test-platform:test\n    build:\n      context: .\n      platforms:\n        - linux/amd64\n        - linux/arm64\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/contextServiceA/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM --platform=$BUILDPLATFORM golang:alpine AS build\n\nARG TARGETPLATFORM\nARG BUILDPLATFORM\nRUN echo \"I'm Service A and I am building for $TARGETPLATFORM, running on $BUILDPLATFORM\" > /log\n\nFROM alpine\nCOPY --from=build /log /log\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/contextServiceB/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM --platform=$BUILDPLATFORM golang:alpine AS build\n\nARG TARGETPLATFORM\nARG BUILDPLATFORM\nRUN echo \"I'm Service B and I am building for $TARGETPLATFORM, running on $BUILDPLATFORM\" > /log\n\nFROM alpine\nCOPY --from=build /log /log\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/platforms/contextServiceC/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM --platform=$BUILDPLATFORM golang:alpine AS build\n\nARG TARGETPLATFORM\nARG BUILDPLATFORM\nRUN echo \"I'm Service C and I am building for $TARGETPLATFORM, running on $BUILDPLATFORM\" > /log\n\nFROM alpine\nCOPY --from=build /log /log\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/privileged/Dockerfile",
    "content": "# syntax = docker/dockerfile:experimental\n\n\n#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nRUN --security=insecure cat /proc/self/status | grep CapEff\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/privileged/compose.yaml",
    "content": "services:\n  privileged-service:\n    build:\n      context: .\n      privileged: true\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/profiles/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\n\nFROM alpine\nRUN --mount=type=secret,id=test-secret ls -la /run/secrets/; cp /run/secrets/test-secret /tmp\n\nCMD [\"cat\", \"/tmp/test-secret\"]"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/profiles/compose.yaml",
    "content": "secrets:\n  test-secret:\n    file: test-secret.txt\n\nservices:\n  secret-build-test:\n    profiles: [\"test\"]\n    build:\n      context: .\n      dockerfile: Dockerfile\n      secrets:\n        - test-secret"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/profiles/test-secret.txt",
    "content": "SECRET"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/secrets/Dockerfile",
    "content": "# syntax=docker/dockerfile:1\n\n\n#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\n\nRUN echo \"foo\" > /tmp/expected\nRUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret > /tmp/actual\nRUN diff /tmp/expected /tmp/actual\n\nRUN echo \"bar\" > /tmp/expected\nRUN --mount=type=secret,id=build_secret cat /run/secrets/build_secret > tmp/actual\nRUN diff --ignore-all-space /tmp/expected /tmp/actual\n\nRUN echo \"zot\" > /tmp/expected\nRUN --mount=type=secret,id=dotenvsecret cat /run/secrets/dotenvsecret > tmp/actual\nRUN diff --ignore-all-space /tmp/expected /tmp/actual\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/secrets/compose.yml",
    "content": "services:\n  ssh:\n    image: build-test-secret\n    build:\n      context: .\n      secrets:\n        - mysecret\n        - dotenvsecret\n        - source: envsecret\n          target: build_secret\n\nsecrets:\n  mysecret:\n    file: ./secret.txt\n  envsecret:\n    environment: SOME_SECRET\n  dotenvsecret:\n    environment: ANOTHER_SECRET\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/secrets/secret.txt",
    "content": "foo\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/ssh/Dockerfile",
    "content": "# syntax=docker/dockerfile:1\n\n\n#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nRUN apk add --no-cache openssh-client\n\nWORKDIR /compose\nCOPY fake_rsa.pub /compose/\n\nRUN --mount=type=ssh,id=fake-ssh,required=true diff <(ssh-add -L) <(cat /compose/fake_rsa.pub)"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/ssh/compose-without-ssh.yaml",
    "content": "services:\n  ssh:\n    image: build-test-ssh\n    build:\n      context: .\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/ssh/compose.yaml",
    "content": "services:\n  ssh:\n    image: build-test-ssh\n    build:\n      context: .\n      ssh:\n        - fake-ssh=./fake_rsa\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/ssh/fake_rsa",
    "content": "-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAACFwAAAAdzc2gtcn\nNhAAAAAwEAAQAAAgEA7nJ4xAhJ7VwI63tuay3DCHaTXeEY92H6YNZ8ptAIBY0mUn6Gc9ms\n94HvcAKemCJkO0fy6U2JOoST+q1YPAJf86NrIU41hZdzrw2QdqG/A3ja4VTAaOJbH9wafK\nHpWLs6kyigGti3KSBabm4HARU8lgtRE6AuCC1+mw821FzTsMWMxRp/rKVxgsiMUsdd57WR\nKOdn8TRm6NHcEsy7X7zAJ7+Ch/muGGCCk3Z9+YUzoVVtY/wGYmWXXj/NUzxnEq0XLyO8HC\n+QU/9dWlh1OLmoMuxN1lYtHRFWWstCboKNsOcIiJsLKfQ1t4z4jXq5P7JTLE5Pngemrr4x\nK21RFjVaGQpOjyQgZn1o0wAvy78KORwgN0Elwcb/XIKJepzzezCIyXlSafeXuHP+oMjM2s\n2MXNHlMKv6Jwh4QYwUQ61+bAcPkcmIdltiAMNLcxYiqEud85EQQl9ciuhMKa0bZl1OEILw\nVSIasEu9BEKVrz52ZZVLGMchqOV/4f1PqPEagnfnRYEttJ6AuaYUaJXvSQP6Zj4AFb6WrP\nwEBIFOuAH9i4WtG52QAK6uc1wsPZlHm8J+VnTEBKFuGERu/uJBWPo43Lju8VrHuZU8QeON\nERKfJbc1EI9XpqWi+3VcWT0QJtxEGW2YmD505+cKNc31xwOtcqwogtwT0wnuj0BAf33HY3\n8AAAc465v1nOub9ZwAAAAHc3NoLXJzYQAAAgEA7nJ4xAhJ7VwI63tuay3DCHaTXeEY92H6\nYNZ8ptAIBY0mUn6Gc9ms94HvcAKemCJkO0fy6U2JOoST+q1YPAJf86NrIU41hZdzrw2Qdq\nG/A3ja4VTAaOJbH9wafKHpWLs6kyigGti3KSBabm4HARU8lgtRE6AuCC1+mw821FzTsMWM\nxRp/rKVxgsiMUsdd57WRKOdn8TRm6NHcEsy7X7zAJ7+Ch/muGGCCk3Z9+YUzoVVtY/wGYm\nWXXj/NUzxnEq0XLyO8HC+QU/9dWlh1OLmoMuxN1lYtHRFWWstCboKNsOcIiJsLKfQ1t4z4\njXq5P7JTLE5Pngemrr4xK21RFjVaGQpOjyQgZn1o0wAvy78KORwgN0Elwcb/XIKJepzzez\nCIyXlSafeXuHP+oMjM2s2MXNHlMKv6Jwh4QYwUQ61+bAcPkcmIdltiAMNLcxYiqEud85EQ\nQl9ciuhMKa0bZl1OEILwVSIasEu9BEKVrz52ZZVLGMchqOV/4f1PqPEagnfnRYEttJ6Aua\nYUaJXvSQP6Zj4AFb6WrPwEBIFOuAH9i4WtG52QAK6uc1wsPZlHm8J+VnTEBKFuGERu/uJB\nWPo43Lju8VrHuZU8QeONERKfJbc1EI9XpqWi+3VcWT0QJtxEGW2YmD505+cKNc31xwOtcq\nwogtwT0wnuj0BAf33HY38AAAADAQABAAACAGK7A0YoKHQfp5HZid7XE+ptLpewnKXR69os\n9XAcszWZPETsHr/ZYcUaCApZC1Hy642gPPRdJnUUcDFblS1DzncTM0iXGZI3I69X7nkwf+\nbwI7EpZoIHN7P5bv4sDHKxE4/bQm/bS/u7abZP2JaaNHvsM6XsrSK1s7aAljNYPE71fVQf\npL3Xwyhj4bZk1n0asQA+0MsO541/V6BxJSR/AxFyOpoSyANP8sEcTw0CGl6zAJhlwj770b\nE0uc+9MvCIuxDJuxnwl9Iv6nd+KQtT1FFBhvk4tXVTuG3fu6IGbKTTBLWLfRPiClv2AvSR\n3CKDs+ykgFLu2BWCqtlQakLH1IW9DTkPExV4ZjkGCRWHEvmJxxOqL6B48tBjwa5gBuPJRA\naYR
i15Z3sprsqCBfp+aHPkMXkkNGSe5ROj8lFFY/f50ZS/9DSlyuUURFLtIGe5XuPNJk7L\nxJkYJAdNbgvk4IPgzsU2EuYvSja5mtuo3dVyEIRtsIAN4xl01edDAxHEow6ar4gZCtXnBb\nWqeqchEi4zVTdkkuDP3SF362pktdY7Op0mS/yFd8LFrca3VCy2PqNhKvlxClRqM9Tlp9cY\nqDuyS9AGT1QO4BMtvSJGFa3P+h76rQsNldC+nGa4wNWvpAUcT5NS8W9QnGp7ah/qOK07t7\nfwYYENeRaAK3OItBABAAABAFjyDlnERaZ+/23B+zN0kQhCvmiNS5HE2+ooR5ofX08F3Uar\nVPevy9p6s2LA+AlXY1ZZ1k0p5MI+4TkAbcB/VXaxrRUw9633p9rAgyumFGhK3i0M4whOCO\nMJxmlp5sz5Qea+YzIa9z0F4ZwwvdHt7cp5joYBZoQ+Kv9OUy4xCs1zZ4ZbEsakGBrtLiTo\nH3odXSg0mXQf10Ae3WkvAJ8M1xL/z1ryFeCvyv1sGwEx+5gvmZ6nnuJEEuXUBlpOwhPlST\n4X9VL7gmdH9OoHnhUn3q2JEBQdVTegGij9wvoYT1bdzwBN/Amisn29K9w1aNdrNbYUJ6PO\n0kE2lotSJ11qD8MAAAEBAP6IRuU25yj7zv0mEsaRWoQ5v3fYKKn4C6Eg3DbzKXybZkLyX7\n6QlyO7uWf54SdXM7sQW8KoXaMu9qbo/o+4o3m7YfOY1MYeTz3yICYObVA7Fc9ZHwKzc1PB\ndFNzy6/G+2niNQF3Q1Fjp31Ve9LwKJK8Kj/eUYZ3QiUIropkw4ppA8q3h+nkVGS23xSrTM\nkGLugBjcnWUfuN0tKx/b5mqziRoyzr5u0qzFDtx97QAyETo/onFrd1bMGED2BHVyrCwtqI\np6SXo2uFzwm/nLtOMlmfpixNcK6dtql/brx3Lsu18a+0a42O5Q/TYRdRq8D60O16rUS/LN\nsFOjIYSA3spnUAAAEBAO/Sc3NTarFylk+yhOTE8G9xDt5ndbY0gsfhM9D4byKlY4yYIvs+\nyQAq3UHgSoN2f087zNubXSNiLJ8TOIPpbk8MzdvjqcpmnBhHcd4V2FLe9+hC8zEBf8MPPf\nCy1kXdCZ0bDMLTdgONiDTIc/0YXhFLZherXNIF1o/7Pcnu6IPwMDl/gcG3H1ncDxaLqxAm\nL29SDXLX2hH9k+YJr9kFaho7PZBAwNYnMooupROSbQ9/lmfCt09ep/83n5G0mo93uGkyV2\n1wcQw9X2ZT8eVHZ4ni3ACC6VYbUn2M3Z+e3tpGaYzKXd/yq0YyppoRvEaxM/ewXappUJul\nXsd/RqSc66MAAAAAAQID\n-----END OPENSSH PRIVATE KEY-----\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/ssh/fake_rsa.pub",
    "content": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDucnjECEntXAjre25rLcMIdpNd4Rj3Yfpg1nym0AgFjSZSfoZz2az3ge9wAp6YImQ7R/LpTYk6hJP6rVg8Al/zo2shTjWFl3OvDZB2ob8DeNrhVMBo4lsf3Bp8oelYuzqTKKAa2LcpIFpubgcBFTyWC1EToC4ILX6bDzbUXNOwxYzFGn+spXGCyIxSx13ntZEo52fxNGbo0dwSzLtfvMAnv4KH+a4YYIKTdn35hTOhVW1j/AZiZZdeP81TPGcSrRcvI7wcL5BT/11aWHU4uagy7E3WVi0dEVZay0Jugo2w5wiImwsp9DW3jPiNerk/slMsTk+eB6auvjErbVEWNVoZCk6PJCBmfWjTAC/Lvwo5HCA3QSXBxv9cgol6nPN7MIjJeVJp95e4c/6gyMzazYxc0eUwq/onCHhBjBRDrX5sBw+RyYh2W2IAw0tzFiKoS53zkRBCX1yK6EwprRtmXU4QgvBVIhqwS70EQpWvPnZllUsYxyGo5X/h/U+o8RqCd+dFgS20noC5phRole9JA/pmPgAVvpas/AQEgU64Af2Lha0bnZAArq5zXCw9mUebwn5WdMQEoW4YRG7+4kFY+jjcuO7xWse5lTxB440REp8ltzUQj1empaL7dVxZPRAm3EQZbZiYPnTn5wo1zfXHA61yrCiC3BPTCe6PQEB/fcdjfw== \n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/sub-dependencies/compose.yaml",
    "content": "services:\n  main:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n      additional_contexts:\n        dep1: service:dep1\n        dep2: service:dep2\n    entrypoint: [\"echo\", \"Hello from main\"]\n\n  dep1:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n      additional_contexts:\n        subdep1: service:subdep1\n        subdep2: service:subdep2\n    entrypoint: [\"echo\", \"Hello from dep1\"]\n\n  dep2:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n    entrypoint: [\"echo\", \"Hello from dep2\"]\n\n  subdep1:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n    entrypoint: [\"echo\", \"Hello from subdep1\"]\n\n  subdep2:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n    entrypoint: [\"echo\", \"Hello from subdep2\"]"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/subset/compose.yaml",
    "content": "services:\n  main:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n    entrypoint: [\"echo\", \"Hello from main\"]\n    depends_on:\n      - dep1\n\n  dep1:\n    build:\n      dockerfile_inline: |\n        FROM alpine\n    entrypoint: [\"echo\", \"Hello from dep1\"]"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/tags/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx:alpine\n\nRUN echo \"SUCCESS\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/build-test/tags/compose.yaml",
    "content": "services:\n  nginx:\n    image: build-test-tags\n    build:\n      context: .\n      tags:\n        - docker.io/docker/build-test-tags:1.0.0\n        - other-image-name:v1.0.0\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/cascade/compose.yaml",
    "content": "services:\n  running:\n    image: alpine\n    command: sleep infinity\n    init: true\n\n  exit:\n    image: alpine\n    command: /bin/true\n    depends_on:\n      running:\n        condition: service_started\n\n  fail:\n    image: alpine\n    command: sh -c \"return 111\"\n    depends_on:\n      exit:\n        condition: service_completed_successfully\n"
  },
  {
    "path": "pkg/e2e/fixtures/commit/compose.yaml",
    "content": "services:\n  service:\n    image: alpine\n    command: sleep infinity\n  service-with-replicas:\n    image: alpine\n    command: sleep infinity\n    deploy:\n      replicas: 3"
  },
  {
    "path": "pkg/e2e/fixtures/compose-pull/duplicate-images/compose.yaml",
    "content": "services:\n  simple:\n    image: alpine:3.13\n    command: top\n  another:\n    image: alpine:3.13\n    command: top\n"
  },
  {
    "path": "pkg/e2e/fixtures/compose-pull/image-present-locally/compose.yaml",
    "content": "services:\n  simple:\n    image: alpine:3.13.12\n    pull_policy: missing\n    command: top\n  latest:\n    image: alpine:latest\n    pull_policy: missing\n    command: top\n"
  },
  {
    "path": "pkg/e2e/fixtures/compose-pull/no-image-name-given/compose.yaml",
    "content": "services:\n  no-image-service:\n    build: .\n"
  },
  {
    "path": "pkg/e2e/fixtures/compose-pull/simple/compose.yaml",
    "content": "services:\n  simple:\n    image: alpine:3.14\n    command: top\n  another:\n    image: alpine:3.15\n    command: top\n"
  },
  {
    "path": "pkg/e2e/fixtures/compose-pull/unknown-image/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine:3.15\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/compose-pull/unknown-image/compose.yaml",
    "content": "services:\n  fail:\n    image: does_not_exists\n  can_build:\n    image: doesn_t_exists_either\n    build: .\n  valid:\n    image: alpine:3.15\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/config/compose.yaml",
    "content": "services:\n  test:\n    image: test\n    ports:\n      - ${PORT:-8080}:80"
  },
  {
    "path": "pkg/e2e/fixtures/configs/compose.yaml",
    "content": "services:\n  from_env:\n    image: alpine\n    configs:\n      - source: from_env\n    command: cat /from_env\n\n  from_file:\n    image: alpine\n    configs:\n      - source: from_file\n    command: cat /from_file\n\n  inlined:\n    image: alpine\n    configs:\n      - source: inlined\n    command: cat /inlined\n\n  target:\n    image: alpine\n    configs:\n      - source: inlined\n        target: /target\n    command: cat /target\n\nconfigs:\n  from_env:\n    environment: CONFIG\n  from_file:\n    file: config.txt\n  inlined:\n    content: This is my $CONFIG\n"
  },
  {
    "path": "pkg/e2e/fixtures/configs/config.txt",
    "content": "This is my config file\n"
  },
  {
    "path": "pkg/e2e/fixtures/container_name/compose.yaml",
    "content": "services:\n  test:\n    image: alpine\n    container_name: test\n    command: /bin/true\n\n  another_test:\n    image: alpine\n    container_name: test\n    command: /bin/true\n"
  },
  {
    "path": "pkg/e2e/fixtures/cp-test/compose.yaml",
    "content": "services:\n  nginx:\n    image:  nginx:alpine"
  },
  {
    "path": "pkg/e2e/fixtures/cp-test/cp-folder/cp-me.txt",
    "content": "hello world from folder"
  },
  {
    "path": "pkg/e2e/fixtures/cp-test/cp-me.txt",
    "content": "hello world"
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM busybox:1.35.0\nRUN echo \"hello\""
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/compose.yaml",
    "content": "services:\n  foo:\n    image: nginx:alpine\n    command: \"${COMMAND}\"\n    depends_on:\n      - bar\n\n  bar:\n    image: nginx:alpine\n    scale: 2\n"
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/dependency-exit.yaml",
    "content": "services:\n  web:\n    image: nginx:alpine\n    depends_on:\n      db:\n        condition: service_healthy\n  db:\n    image: alpine\n    command: sh -c \"exit 1\"\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/deps-completed-successfully.yaml",
    "content": "services:\n  oneshot:\n    image: alpine\n    command: echo 'hello world'\n  longrunning:\n    image: alpine\n    init: true\n    depends_on:\n      oneshot:\n        condition: service_completed_successfully\n    command: sleep infinity\n"
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/deps-not-required.yaml",
    "content": "services:\n  foo:\n    image: bash\n    command: echo \"foo\"\n    depends_on:\n      bar:\n        required: false\n        condition: service_healthy\n  bar:\n    image: nginx:alpine\n    profiles: [not-required]"
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/recreate-no-deps.yaml",
    "content": "version: '3.8'\nservices:\n  my-service:\n    image: alpine\n    command: tail -f /dev/null\n    init: true\n    depends_on:\n      nginx: {condition: service_healthy}\n\n  nginx:\n    image: nginx:alpine\n    stop_signal: SIGTERM\n    healthcheck:\n      test:     \"echo | nc -w 5 localhost:80\"\n      interval: 2s\n      timeout:  1s\n      retries:  10\n"
  },
  {
    "path": "pkg/e2e/fixtures/dependencies/service-image-depends-on.yaml",
    "content": "services:\n  foo:\n    image: built-image-dependency\n    build:\n      context: .\n  bar:\n    image: built-image-dependency\n    depends_on:\n      - foo\n"
  },
  {
    "path": "pkg/e2e/fixtures/dotenv/development/compose.yaml",
    "content": "services:\n  backend:\n    image: $IMAGE_NAME:$IMAGE_TAG\n  test:\n    profiles:\n      - test\n    image: enabled:profile"
  },
  {
    "path": "pkg/e2e/fixtures/dotenv/raw.yaml",
    "content": "services:\n  test:\n    image: alpine\n    command: sh -c \"echo $$TEST_VAR\"\n    env_file:\n      - path: .env.raw\n        format: raw # parse without interpolation"
  },
  {
    "path": "pkg/e2e/fixtures/env-secret/child/compose.yaml",
    "content": "services:\n  included:\n    image: alpine\n    secrets:\n      - my-secret\n    command: cat /run/secrets/my-secret\n\nsecrets:\n  my-secret:\n    environment: 'MY_SECRET'\n"
  },
  {
    "path": "pkg/e2e/fixtures/env-secret/compose.yaml",
    "content": "include:\n  - path: child/compose.yaml\n    env_file:\n      - secret.env\n\nservices:\n  foo:\n    image: alpine\n    secrets:\n      - source: secret\n        target: bar\n        uid: \"1005\"\n        gid: \"1005\"\n        mode: 0440\n    command: cat /run/secrets/bar\n\nsecrets:\n  secret:\n    environment: SECRET\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/env-secret/secret.env",
    "content": "MY_SECRET='this-is-secret'\n"
  },
  {
    "path": "pkg/e2e/fixtures/env_file/compose.yaml",
    "content": "services:\n  serviceA:\n    image: nginx:latest\n\n  serviceB:\n    image: nginx:latest\n    env_file:\n      - /doesnotexist/.env\n\n  serviceC:\n    profiles: [\"test\"]\n    image: alpine\n    env_file: test.env\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/env_file/test.env",
    "content": "FOO=BAR"
  },
  {
    "path": "pkg/e2e/fixtures/environment/empty-variable/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nENV  EMPTY=not_empty\nCMD [\"sh\", \"-c\", \"echo \\\"=$EMPTY=\\\"\"]\n"
  },
  {
    "path": "pkg/e2e/fixtures/environment/empty-variable/compose.yaml",
    "content": "services:\n  empty-variable:\n    build:\n      context: .\n    image: empty-variable\n    environment:\n      - EMPTY # expect to propagate value from user's env OR unset in container"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-file-comments/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nENV  COMMENT=Dockerfile\nENV  NO_COMMENT=Dockerfile\nCMD [\"sh\", \"-c\", \"printenv\", \"|\", \"grep\", \"COMMENT\"]\n"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-file-comments/compose.yaml",
    "content": "services:\n  env-file-comments:\n    build:\n      context: .\n    image: env-file-comments"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-interpolation/compose.yaml",
    "content": "services:\n  env-interpolation:\n    image: bash\n    environment:\n      IMAGE: ${IMAGE}\n    command: echo \"$IMAGE\""
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-interpolation-default-value/compose.yaml",
    "content": "services:\n  env-interpolation:\n    image: bash\n    environment:\n      IMAGE: ${IMAGE}\n    command: echo \"$IMAGE\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-priority/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine\nENV  WHEREAMI=Dockerfile\nCMD [\"printenv\", \"WHEREAMI\"]\n"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-priority/compose-with-env-file.yaml",
    "content": "services:\n  env-compose-priority:\n    image: env-compose-priority\n    build:\n      context: .\n    env_file:\n      - .env.override\n"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-priority/compose-with-env.yaml",
    "content": "services:\n  env-compose-priority:\n    image: env-compose-priority\n    build:\n      context: .\n    environment:\n      WHEREAMI: \"Compose File\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/environment/env-priority/compose.yaml",
    "content": "services:\n  env-compose-priority:\n    image: env-compose-priority\n    build:\n      context: .\n"
  },
  {
    "path": "pkg/e2e/fixtures/exec/compose.yaml",
    "content": "services:\n  test:\n    image: alpine\n    command: tail -f /dev/null"
  },
  {
    "path": "pkg/e2e/fixtures/export/compose.yaml",
    "content": "services:\n  service:\n    image: alpine\n    command: sleep infinity\n  service-with-replicas:\n    image: alpine\n    command: sleep infinity\n    deploy:\n      replicas: 3"
  },
  {
    "path": "pkg/e2e/fixtures/external/compose.yaml",
    "content": "services:\n  test:\n    image: nginx:alpine\n\n  other:\n    image: nginx:alpine\n    networks:\n      test_network:\n        ipv4_address: 8.8.8.8\n\nnetworks:\n  test_network:\n    external: true\n    name: foo_bar"
  },
  {
    "path": "pkg/e2e/fixtures/hooks/compose.yaml",
    "content": "services:\n  sample:\n    image: nginx\n    volumes:\n      - data:/data\n    pre_stop:\n      - command: sh -c 'echo \"In the pre-stop\" >> /data/log.txt'\n  test:\n    image: nginx\n    post_start:\n      - command: sh -c 'echo env'\nvolumes:\n  data:\n    name: sample-data"
  },
  {
    "path": "pkg/e2e/fixtures/hooks/poststart/compose-error.yaml",
    "content": "services:\n\n  test:\n    image: nginx\n    post_start:\n      - command: sh -c 'command in error'\n"
  },
  {
    "path": "pkg/e2e/fixtures/hooks/poststart/compose-success.yaml",
    "content": "services:\n  test:\n    image: nginx\n    post_start:\n      - command: sh -c 'echo env'\n"
  },
  {
    "path": "pkg/e2e/fixtures/hooks/prestop/compose-error.yaml",
    "content": "services:\n  sample:\n    image: nginx\n    volumes:\n      - data:/data\n    pre_stop:\n      - command: sh -c 'command in error'\nvolumes:\n  data:\n    name: sample-data"
  },
  {
    "path": "pkg/e2e/fixtures/hooks/prestop/compose-success.yaml",
    "content": "services:\n  sample:\n    image: nginx\n    volumes:\n      - data:/data\n    pre_stop:\n      - command: sh -c 'echo \"In the pre-stop\" >> /data/log.txt'\nvolumes:\n  data:\n    name: sample-data"
  },
  {
    "path": "pkg/e2e/fixtures/image-volume-recreate/Dockerfile",
    "content": "# syntax=docker/dockerfile:1\n#\n#    Copyright 2020 Docker Compose CLI authors\n#\n#    Licensed under the Apache License, Version 2.0 (the \"License\");\n#    you may not use this file except in compliance with the License.\n#    You may obtain a copy of the License at\n#\n#        http://www.apache.org/licenses/LICENSE-2.0\n#\n#    Unless required by applicable law or agreed to in writing, software\n#    distributed under the License is distributed on an \"AS IS\" BASIS,\n#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#    See the License for the specific language governing permissions and\n#    limitations under the License.\n#\n\nFROM alpine\nWORKDIR /app\nARG CONTENT=initial\nRUN echo \"$CONTENT\" > /app/content.txt\n"
  },
  {
    "path": "pkg/e2e/fixtures/image-volume-recreate/compose.yaml",
    "content": "services:\n  source:\n    build:\n      context: .\n      dockerfile: Dockerfile\n    image: image-volume-source\n\n  consumer:\n    image: alpine\n    depends_on:\n      - source\n    command: [\"cat\", \"/data/content.txt\"]\n    volumes:\n      - type: image\n        source: image-volume-source\n        target: /data\n        image:\n          subpath: app\n"
  },
  {
    "path": "pkg/e2e/fixtures/init-container/compose.yaml",
    "content": "services:\n  foo:\n    image: alpine\n    command: \"echo hello\"\n\n  bar:\n    image: alpine\n    command: \"echo world\"\n    depends_on:\n      foo:\n        condition: \"service_completed_successfully\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/ipam/compose.yaml",
    "content": "services:\n  foo:\n    image: alpine\n    init: true\n    entrypoint: [\"sleep\", \"600\"]\n    networks:\n      default:\n        ipv4_address: 10.1.0.100  # <-- Fixed IP address\nnetworks:\n  default:\n    ipam:\n      config:\n        - subnet: 10.1.0.0/16\n"
  },
  {
    "path": "pkg/e2e/fixtures/ipc-test/compose.yaml",
    "content": "services:\n  service:\n    image: alpine\n    command: top\n    ipc: \"service:shareable\"\n  container:\n    image: alpine\n    command: top\n    ipc: \"container:ipc_mode_container\"\n  shareable:\n    image: alpine\n    command: top\n    ipc: shareable\n"
  },
  {
    "path": "pkg/e2e/fixtures/links/compose.yaml",
    "content": "services:\n  foo:\n    image: nginx:alpine\n    links:\n      - bar\n\n  bar:\n    image: nginx:alpine\n"
  },
  {
    "path": "pkg/e2e/fixtures/logging-driver/compose.yaml",
    "content": "services:\n  fluentbit:\n    image: fluent/fluent-bit:3.1.7-debug\n    ports:\n      - \"24224:24224\"\n      - \"24224:24224/udp\"\n    environment:\n      FOO: ${BAR}\n\n  app:\n    image: nginx\n    depends_on:\n      fluentbit:\n        condition: service_started\n        restart: true\n    logging:\n      driver: fluentd\n      options:\n        fluentd-address: ${HOST:-127.0.0.1}:24224\n"
  },
  {
    "path": "pkg/e2e/fixtures/logs-test/cat.yaml",
    "content": "services:\n  test:\n    image: alpine\n    command: cat /text_file.txt\n    volumes:\n      - ${FILE}:/text_file.txt\n"
  },
  {
    "path": "pkg/e2e/fixtures/logs-test/compose.yaml",
    "content": "services:\n  ping:\n    image: alpine\n    init: true\n    command: ping localhost -c ${REPEAT:-1}\n  hello:\n    image: alpine\n    command: echo hello\n    deploy:\n      replicas: 2\n"
  },
  {
    "path": "pkg/e2e/fixtures/logs-test/restart.yaml",
    "content": "services:\n  ping:\n    image: alpine\n    command: \"sh -c 'ping -c 2 localhost && exit 1'\"\n    restart: \"on-failure:2\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/model/compose.yaml",
    "content": "services:\n  test:\n    image: alpine/curl\n    models:\n      - foo\n\nmodels:\n  foo:\n    model: ai/smollm2\n"
  },
  {
    "path": "pkg/e2e/fixtures/nested/compose.yaml",
    "content": "services:\n  echo:\n    image: alpine\n    command: echo $ROOT $SUB win=$WIN\n"
  },
  {
    "path": "pkg/e2e/fixtures/network-alias/compose.yaml",
    "content": "services:\n\n  container1:\n    image: nginx\n    links:\n      - container2:container\n\n  container2:\n    image: nginx\n    networks:\n      default:\n        aliases:\n          - alias-of-container2\n"
  },
  {
    "path": "pkg/e2e/fixtures/network-interface-name/compose.yaml",
    "content": "services:\n  test:\n    image: alpine\n    command: ip link show\n    networks:\n      default:\n        interface_name: foobar\n"
  },
  {
    "path": "pkg/e2e/fixtures/network-links/compose.yaml",
    "content": "services:\n  container1:\n    image: nginx\n    network_mode: bridge\n  container2:\n    image: nginx\n    network_mode: bridge\n    links:\n      - container1\n"
  },
  {
    "path": "pkg/e2e/fixtures/network-recreate/compose.yaml",
    "content": "services:\n  web:\n    image: nginx\n    networks:\n      - test\n\nnetworks:\n  test:\n    labels:\n      - foo=${FOO:-foo}"
  },
  {
    "path": "pkg/e2e/fixtures/network-test/compose.subnet.yaml",
    "content": "services:\n  test:\n    image: nginx:alpine\n    networks:\n      - test\n\nnetworks:\n  test:\n    ipam:\n      config:\n        - subnet: ${SUBNET-172.99.0.0/16}\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/network-test/compose.yaml",
    "content": "services:\n  mydb:\n    image: mariadb\n    network_mode: \"service:db\"\n    environment:\n      - MYSQL_ALLOW_EMPTY_PASSWORD=yes\n  db:\n    image: gtardif/sentences-db\n    init: true\n    networks:\n      - dbnet\n      - closesnetworkname1\n      - closesnetworkname2\n  words:\n    image: gtardif/sentences-api\n    init: true\n    ports:\n      - \"8080:8080\"\n    networks:\n      - dbnet\n      - servicenet\n  web:\n    image: gtardif/sentences-web\n    init: true\n    ports:\n      - \"80:80\"\n    labels:\n      - \"my-label=test\"\n    networks:\n      - servicenet\n\nnetworks:\n  dbnet:\n  servicenet:\n    name: microservices\n  closesnetworkname1:\n    name: closenamenet\n  closesnetworkname2:\n    name: closenamenet-2\n"
  },
  {
    "path": "pkg/e2e/fixtures/network-test/mac_address.yaml",
    "content": "services:\n  test:\n    image: nginx:alpine\n    mac_address: 00:e0:84:35:d0:e8\n"
  },
  {
    "path": "pkg/e2e/fixtures/no-deps/network-mode.yaml",
    "content": "services:\n  app:\n    image: nginx:alpine\n    network_mode: service:db\n\n  db:\n    image: nginx:alpine\n"
  },
  {
    "path": "pkg/e2e/fixtures/no-deps/volume-from.yaml",
    "content": "services:\n  app:\n    image: nginx:alpine\n    volumes_from:\n      - db\n\n  db:\n    image: nginx:alpine\n    volumes:\n      - /var/data\n"
  },
  {
    "path": "pkg/e2e/fixtures/orphans/compose.yaml",
    "content": "services:\n  orphan:\n    profiles: [run]\n    image: alpine\n    command: echo hello\n  test:\n    image: nginx:alpine\n"
  },
  {
    "path": "pkg/e2e/fixtures/pause/compose.yaml",
    "content": "services:\n  a:\n    image:  nginx:alpine\n    ports: [80]\n    healthcheck:\n      test: wget --spider -S -T1 http://localhost:80\n      interval: 1s\n      timeout: 1s\n  b:\n    image:  nginx:alpine\n    ports: [80]\n    depends_on:\n      - a\n    healthcheck:\n      test: wget --spider -S -T1 http://localhost:80\n      interval: 1s\n      timeout: 1s\n"
  },
  {
    "path": "pkg/e2e/fixtures/port-range/compose.yaml",
    "content": "services:\n  a:\n    image: nginx:alpine\n    scale: 5\n    ports:\n      - \"6005-6015:80\"\n\n  b:\n    image: nginx:alpine\n    ports:\n      - 80\n\n  c:\n    image: nginx:alpine\n    ports:\n      - 80"
  },
  {
    "path": "pkg/e2e/fixtures/profiles/compose.yaml",
    "content": "services:\n  regular-service:\n    image: nginx:alpine\n\n  profiled-service:\n    image: nginx:alpine\n    profiles:\n      - test-profile"
  },
  {
    "path": "pkg/e2e/fixtures/profiles/docker-compose.yaml",
    "content": "services:\n  foo:\n    container_name: foo_c\n    profiles: [ test ]\n    image: alpine\n    depends_on: [ db ]\n\n  bar:\n    container_name: bar_c\n    profiles: [ test ]\n    image: alpine\n  \n  db:\n    container_name: db_c\n    image: alpine"
  },
  {
    "path": "pkg/e2e/fixtures/profiles/test-profile.env",
    "content": "COMPOSE_PROFILES=test-profile\n"
  },
  {
    "path": "pkg/e2e/fixtures/project-volume-bind-test/docker-compose.yml",
    "content": "services:\n  frontend:\n    image: nginx\n    container_name: frontend\n    volumes:\n      - project-data:/data\n\nvolumes:\n  project-data:\n    driver: local\n    driver_opts:\n      type: none\n      o: bind\n      device: \"${TEST_DIR}\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/providers/depends-on-multiple-providers.yaml",
    "content": "services:\n  test:\n    image: alpine\n    command: env\n    depends_on:\n      - provider1\n      - provider2\n  provider1:\n    provider:\n      type: example-provider\n      options:\n        name: provider1\n        type: test1\n        size: 1\n  provider2:\n    provider:\n      type: example-provider\n      options:\n        name: provider2\n        type: test2\n        size: 2\n"
  },
  {
    "path": "pkg/e2e/fixtures/ps-test/compose.yaml",
    "content": "services:\n  nginx:\n    image: nginx:latest\n    expose:\n    - '80'\n    - '443'\n    - '8080'\n  busybox:\n    image: busybox\n    command: busybox httpd -f -p 8000\n    ports:\n    - '127.0.0.1:8001:8000'\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM alpine:latest"
  },
  {
    "path": "pkg/e2e/fixtures/publish/common.yaml",
    "content": "services:\n  foo:\n    image: bar"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-bind-mount.yml",
    "content": "services:\n  serviceA:\n    image: a\n    volumes:\n      - .:/user-data\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-build-only.yml",
    "content": "services:\n  serviceA:\n    build:\n      context: .\n      dockerfile: Dockerfile\n  serviceB:\n    build:\n      context: .\n      dockerfile: Dockerfile\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-env-file.yml",
    "content": "services:\n  serviceA:\n    image: \"alpine:3.12\"\n    env_file:\n      - publish.env\n  serviceB:\n    image: \"alpine:3.12\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-environment.yml",
    "content": "services:\n  serviceA:\n    image: \"alpine:3.12\"\n    environment:\n        - \"FOO=bar\"\n  serviceB:\n    image: \"alpine:3.12\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-local-include.yml",
    "content": "include:\n  - common.yaml\n\nservices:\n  test:\n    image: test"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-multi-env-config.yml",
    "content": "services:\n  serviceA:\n    image: \"alpine:3.12\"\n    environment:\n      - \"FOO=bar\"\n  serviceB:\n    image: \"alpine:3.12\"\n    env_file:\n      - publish.env\n    environment:\n      - \"BAR=baz\""
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-sensitive.yml",
    "content": "services:\n  serviceA:\n    image: \"alpine:3.12\"\n    environment:\n      - AWS_ACCESS_KEY_ID=A3TX1234567890ABCDEF\n      - AWS_SECRET_ACCESS_KEY=aws\"12345+67890/abcdefghijklm+NOPQRSTUVWXYZ+\"\n    configs:\n      - myconfig\n  serviceB:\n    image: \"alpine:3.12\"\n    env_file:\n      - publish-sensitive.env\n    secrets:\n      - mysecret\nconfigs:\n  myconfig:\n    file: config.txt\nsecrets:\n  mysecret:\n    file: secret.txt"
  },
  {
    "path": "pkg/e2e/fixtures/publish/compose-with-extends.yml",
    "content": "services:\n  test:\n    extends:\n      file: common.yaml\n      service: foo\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/config.txt",
    "content": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw"
  },
  {
    "path": "pkg/e2e/fixtures/publish/oci/compose-override.yaml",
    "content": "services:\n  app:\n    env_file: test.env"
  },
  {
    "path": "pkg/e2e/fixtures/publish/oci/compose.yaml",
    "content": "services:\n  app:\n    extends:\n      file: extends.yaml\n      service: test"
  },
  {
    "path": "pkg/e2e/fixtures/publish/oci/extends.yaml",
    "content": "services:\n  test:\n    image: alpine"
  },
  {
    "path": "pkg/e2e/fixtures/publish/oci/test.env",
    "content": "HELLO=WORLD"
  },
  {
    "path": "pkg/e2e/fixtures/publish/publish-sensitive.env",
    "content": "GITHUB_TOKEN=ghp_1234567890abcdefghijklmnopqrstuvwxyz\n"
  },
  {
    "path": "pkg/e2e/fixtures/publish/publish.env",
    "content": "FOO=bar\nQUIX="
  },
  {
    "path": "pkg/e2e/fixtures/publish/secret.txt",
    "content": "-----BEGIN DSA PRIVATE KEY-----\nwxyz+ABC=\n-----END DSA PRIVATE KEY-----"
  },
  {
    "path": "pkg/e2e/fixtures/recreate-volumes/bind.yaml",
    "content": "services:\n  app:\n    image: alpine\n    volumes:\n      - .:/my_vol\n"
  },
  {
    "path": "pkg/e2e/fixtures/recreate-volumes/compose.yaml",
    "content": "services:\n  app:\n    image: alpine\n    volumes:\n      - my_vol:/my_vol\n\nvolumes:\n  my_vol:\n    labels:\n      foo: bar\n"
  },
  {
    "path": "pkg/e2e/fixtures/recreate-volumes/compose2.yaml",
    "content": "services:\n  app:\n    image: alpine\n    volumes:\n      - my_vol:/my_vol\n\nvolumes:\n  my_vol:\n    labels:\n      foo: zot\n"
  },
  {
    "path": "pkg/e2e/fixtures/resources/compose.yaml",
    "content": "volumes:\n  my_vol: {}\n\nnetworks:\n  my_net: {}"
  },
  {
    "path": "pkg/e2e/fixtures/restart-test/compose-depends-on.yaml",
    "content": "services:\n  with-restart:\n    image: nginx:alpine\n    init: true\n    command: tail -f /dev/null\n    stop_signal: SIGTERM\n    depends_on:\n      nginx: {condition: service_healthy, restart: true}\n\n  no-restart:\n    image: nginx:alpine\n    init: true\n    command: tail -f /dev/null\n    stop_signal: SIGTERM\n    depends_on:\n      nginx: { condition: service_healthy }\n\n  nginx:\n    image: nginx:alpine\n    labels:\n      TEST: ${LABEL:-test}\n    stop_signal: SIGTERM\n    healthcheck:\n      test:     \"echo | nc -w 5 localhost:80\"\n      interval: 2s\n      timeout:  1s\n      retries:  10\n"
  },
  {
    "path": "pkg/e2e/fixtures/restart-test/compose.yaml",
    "content": "services:\n  restart:\n    image: alpine\n    init: true\n    command: ash -c \"if [[ -f /tmp/restart.lock ]] ; then sleep infinity; else touch /tmp/restart.lock; fi\"\n\n  test:\n    profiles:\n      - test\n    image: alpine\n    init: true\n    command: ash -c \"if [[ -f /tmp/restart.lock ]] ; then sleep infinity; else touch /tmp/restart.lock; fi\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/build-once-nested.yaml",
    "content": "services:\n  # Database service with build\n  db:\n    pull_policy: build\n    build:\n      dockerfile_inline: |\n        FROM alpine\n        RUN echo \"DB built at $(date)\" > /db-build.txt\n        CMD sleep 3600\n\n  # API service that depends on db\n  api:\n    pull_policy: build\n    build:\n      dockerfile_inline: |\n        FROM alpine\n        RUN echo \"API built at $(date)\" > /api-build.txt\n        CMD sleep 3600\n    depends_on:\n      - db\n\n  # App service that depends on api (which depends on db)\n  app:\n    pull_policy: build\n    build:\n      dockerfile_inline: |\n        FROM alpine\n        RUN echo \"App built at $(date)\" > /app-build.txt\n        CMD echo \"App running\"\n    depends_on:\n      - api\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/build-once-no-deps.yaml",
    "content": "services:\n  # Simple service with no dependencies\n  simple:\n    pull_policy: build\n    build:\n      dockerfile_inline: |\n        FROM alpine\n        RUN echo \"Simple built at $(date)\" > /build.txt\n        CMD echo \"Simple service\"\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/build-once.yaml",
    "content": "services:\n  # Service with pull_policy: build to ensure it always rebuilds\n  # This is the key to testing the bug - without the fix, this would build twice\n  nginx:\n    pull_policy: build\n    build:\n      dockerfile_inline: |\n        FROM alpine\n        RUN echo \"Nginx built at $(date)\" > /build-time.txt\n        CMD sleep 3600\n\n  # Service that depends on nginx\n  curl:\n    image: alpine\n    depends_on:\n      - nginx\n    command: echo \"curl service\"\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/compose.yaml",
    "content": "services:\n  back:\n    image: alpine\n    command: echo \"Hello there!!\"\n    depends_on:\n      - db\n    networks:\n      - backnet\n  db:\n    image: nginx:alpine\n    networks:\n      - backnet\n    volumes:\n      - data:/test\n  front:\n    image: nginx:alpine\n    networks:\n      - frontnet\n  build:\n    build:\n      dockerfile_inline: \"FROM base\"\n      additional_contexts:\n        base: \"service:build_base\"\n  build_base:\n    build:\n      dockerfile_inline: \"FROM alpine\"\nnetworks:\n  frontnet:\n  backnet:\nvolumes:\n  data:\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/deps.yaml",
    "content": "services:\n  service_a:\n    image: bash\n    command: echo \"a\"\n    depends_on:\n      - shared_dep\n  service_b:\n    image: bash\n    command: echo \"b\"\n    depends_on:\n      - shared_dep\n  shared_dep:\n    image: bash"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/orphan.yaml",
    "content": "services:\n  simple:\n    image: alpine\n    command: echo \"Hi there!!\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/piped-test.yaml",
    "content": "services:\n  piped-test:\n    image: alpine\n    command: cat\n    # Service that will receive piped input and echo it back\n  tty-test:\n    image: alpine\n    command: sh -c \"if [ -t 0 ]; then echo 'TTY detected'; else echo 'No TTY detected'; fi\"\n    # Service to test TTY detection"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/ports.yaml",
    "content": "services:\n  back:\n    image: alpine\n    ports:\n      - 8082:80\n"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/pull.yaml",
    "content": "services:\n  backend:\n    image: nginx\n    command: nginx -t"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/quiet-pull.yaml",
    "content": "services:\n  backend:\n    image: hello-world"
  },
  {
    "path": "pkg/e2e/fixtures/run-test/run.env",
    "content": "FOO=BAR\n"
  },
  {
    "path": "pkg/e2e/fixtures/scale/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx:alpine\nARG FOO\nLABEL FOO=$FOO"
  },
  {
    "path": "pkg/e2e/fixtures/scale/build.yaml",
    "content": "services:\n  test:\n    build: .\n"
  },
  {
    "path": "pkg/e2e/fixtures/scale/compose.yaml",
    "content": "services:\n  back:\n    image: nginx:alpine\n    depends_on:\n      - db\n  db:\n    image: nginx:alpine\n    environment:\n      - MAYBE\n  front:\n    image: nginx:alpine\n    deploy:\n      replicas: 2\n  dbadmin:\n    image: nginx:alpine\n    deploy:\n      replicas: 0"
  },
  {
    "path": "pkg/e2e/fixtures/sentences/compose.yaml",
    "content": "services:\n  db:\n    image: gtardif/sentences-db\n    init: true\n  words:\n    image: gtardif/sentences-api\n    init: true\n    ports:\n      - \"95:8080\"\n  web:\n    image: gtardif/sentences-web\n    init: true\n    ports:\n      - \"90:80\"\n    labels:\n      - \"my-label=test\"\n    healthcheck:\n      test: [\"CMD\", \"curl\", \"-f\", \"http://localhost:80/\"]\n      interval: 2s\n"
  },
  {
    "path": "pkg/e2e/fixtures/simple-build-test/compose-interpolate.yaml",
    "content": "services:\n  nginx:\n    build:\n      context: nginx-build\n      dockerfile: ${MYVAR}\n"
  },
  {
    "path": "pkg/e2e/fixtures/simple-build-test/compose.yaml",
    "content": "services:\n  nginx:\n    build:\n      context: nginx-build\n      dockerfile: Dockerfile\n"
  },
  {
    "path": "pkg/e2e/fixtures/simple-build-test/nginx-build/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx:alpine\n\nARG FOO\nLABEL FOO=$FOO\nCOPY static /usr/share/nginx/html\n"
  },
  {
    "path": "pkg/e2e/fixtures/simple-build-test/nginx-build/static/index.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n  <title>Docker Nginx</title>\n</head>\n<body>\n  <h2>Hello from Nginx container</h2>\n</body>\n</html>\n"
  },
  {
    "path": "pkg/e2e/fixtures/simple-composefile/compose.yaml",
    "content": "services:\n  simple:\n    image: alpine\n    command: top\n  another:\n    image: alpine\n    command: top\n"
  },
  {
    "path": "pkg/e2e/fixtures/simple-composefile/id.yaml",
    "content": "services:\n  test:\n    image: ${ID:?ID variable must be set}\n"
  },
  {
    "path": "pkg/e2e/fixtures/start-fail/compose.yaml",
    "content": "services:\n  fail:\n    image: alpine\n    init: true\n    command: sleep infinity\n    healthcheck:\n      test: \"false\"\n      interval: 1s\n      retries: 3\n  depends:\n    image: alpine\n    init: true\n    command: sleep infinity\n    depends_on:\n      fail:\n        condition: service_healthy\n"
  },
  {
    "path": "pkg/e2e/fixtures/start-fail/start-depends_on-long-lived.yaml",
    "content": "services:\n  safe:\n    image: 'alpine'\n    init: true\n    command: ['/bin/sh', '-c', 'sleep infinity']  # never exiting\n  failure:\n    image: 'alpine'\n    init: true\n    command: ['/bin/sh', '-c', 'sleep 1 ; echo \"exiting with error\" ; exit 42']\n  test:\n    image: 'alpine'\n    init: true\n    command: ['/bin/sh', '-c', 'sleep 99999 ; echo \"tests are OK\"']  # very long job\n    depends_on: [safe]\n"
  },
  {
    "path": "pkg/e2e/fixtures/start-stop/compose.yaml",
    "content": "services:\n  simple:\n    image:  nginx:alpine\n  another:\n    image:  nginx:alpine\n"
  },
  {
    "path": "pkg/e2e/fixtures/start-stop/other.yaml",
    "content": "services:\n  a-different-one:\n    image:  nginx:alpine\n  and-another-one:\n    image:  nginx:alpine\n"
  },
  {
    "path": "pkg/e2e/fixtures/start-stop/start-stop-deps.yaml",
    "content": "services:\n  another_2:\n    image:  nginx:alpine\n  another:\n    image:  nginx:alpine\n    depends_on:\n    - another_2\n  dep_2:\n    image:  nginx:alpine\n  dep_1:\n    image:  nginx:alpine\n    depends_on:\n    - dep_2\n  desired:\n    image:  nginx:alpine\n    depends_on:\n    - dep_1\n"
  },
  {
    "path": "pkg/e2e/fixtures/start_interval/compose.yaml",
    "content": "services:\n  test:\n    image: \"nginx\"\n    healthcheck:\n      interval: 30s\n      start_period: 10s\n      start_interval: 1s\n      test: \"/bin/true\"\n\n  error:\n    image: \"nginx\"\n    healthcheck:\n      interval: 30s\n      start_interval: 1s\n      test: \"/bin/true\""
  },
  {
    "path": "pkg/e2e/fixtures/stdout-stderr/compose.yaml",
    "content": "services:\n  stderr:\n    image: alpine\n    init: true\n    command: /bin/ash /log_to_stderr.sh\n    volumes:\n           - ./log_to_stderr.sh:/log_to_stderr.sh\n"
  },
  {
    "path": "pkg/e2e/fixtures/stdout-stderr/log_to_stderr.sh",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\n>&2 echo \"log to stderr\"\necho \"log to stdout\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/stop/compose.yaml",
    "content": "services:\n  service1:\n    image: alpine\n    command: /bin/true\n  service2:\n    image: alpine\n    command: ping -c 2 localhost\n    pre_stop:\n      - command: echo \"stop hook running...\"\n"
  },
  {
    "path": "pkg/e2e/fixtures/switch-volumes/compose.yaml",
    "content": "services:\n  app:\n    image: alpine\n    volumes:\n      - my_vol:/my_vol\n\nvolumes:\n  my_vol:\n    external: true\n    name: test_external_volume\n"
  },
  {
    "path": "pkg/e2e/fixtures/switch-volumes/compose2.yaml",
    "content": "services:\n  app:\n    image: alpine\n    volumes:\n      - my_vol:/my_vol\n\nvolumes:\n  my_vol:\n    external: true\n    name: test_external_volume_2\n"
  },
  {
    "path": "pkg/e2e/fixtures/ups-deps-stop/compose.yaml",
    "content": "services:\n  dependency:\n    image: alpine\n    init: true\n    command: /bin/sh -c 'while true; do echo \"hello dependency\"; sleep 1; done'\n\n  app:\n    depends_on: ['dependency']\n    image: alpine\n    init: true\n    command: /bin/sh -c 'while true; do echo \"hello app\"; sleep 1; done'\n"
  },
  {
    "path": "pkg/e2e/fixtures/ups-deps-stop/orphan.yaml",
    "content": "services:\n  orphan:\n    image: alpine\n    init: true\n    command: /bin/sh -c 'while true; do echo \"hello orphan\"; sleep 1; done'\n"
  },
  {
    "path": "pkg/e2e/fixtures/volume-test/compose.yaml",
    "content": "services:\n  nginx:\n    build: nginx-build\n    volumes:\n      - ./static:/usr/share/nginx/html\n    ports:\n      - 8090:80\n\n  nginx2:\n    build: nginx-build\n    volumes:\n      - staticVol:/usr/share/nginx/html:ro\n      - /usr/src/app/node_modules\n      - otherVol:/usr/share/nginx/test\n    ports:\n      - 9090:80\n    configs:\n      - myconfig\n    secrets:\n      - mysecret\n\nvolumes:\n  staticVol:\n  otherVol:\n    name: myVolume\n\nconfigs:\n  myconfig:\n    file: ./static/index.html\n\nsecrets:\n  mysecret:\n    file: ./static/index.html"
  },
  {
    "path": "pkg/e2e/fixtures/volume-test/nginx-build/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx:alpine\n"
  },
  {
    "path": "pkg/e2e/fixtures/volume-test/static/index.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n<head>\n  <meta charset=\"utf-8\">\n  <title>Docker Nginx</title>\n</head>\n<body>\n  <h2>Hello from Nginx container</h2>\n</body>\n</html>\n"
  },
  {
    "path": "pkg/e2e/fixtures/volumes/compose.yaml",
    "content": "services:\n  with_image:\n    image: alpine\n    command: \"ls -al /mnt/image\"\n    volumes:\n      - type: image\n        source: nginx:alpine\n        target: /mnt/image\n        image:\n          subpath: usr/share/nginx/html/"
  },
  {
    "path": "pkg/e2e/fixtures/wait/compose.yaml",
    "content": "services:\n  faster:\n    image: alpine\n    command: sleep 2\n  slower:\n    image: alpine\n    command: sleep 5\n  infinity:\n    image: alpine\n    command: sleep infinity\n\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/compose.yaml",
    "content": "x-dev: &x-dev\n  watch:\n    - action: sync\n      path: ./data\n      target: /app/data\n      ignore:\n        - '*.foo'\n        - ./ignored\n    - action: sync+restart\n      path: ./config\n      target: /app/config\n\nservices:\n  alpine:\n    build:\n      dockerfile_inline: |-\n        FROM alpine\n        RUN mkdir -p /app/data\n        RUN mkdir -p /app/config\n    init: true\n    command: sleep infinity\n    develop: *x-dev\n  busybox:\n    build:\n      dockerfile_inline: |-\n        FROM busybox\n        RUN mkdir -p /app/data\n        RUN mkdir -p /app/config\n    init: true\n    command: sleep infinity\n    develop: *x-dev\n  debian:\n    build:\n      dockerfile_inline: |-\n        FROM debian\n        RUN mkdir -p /app/data\n        RUN mkdir -p /app/config\n    init: true\n    command: sleep infinity\n    volumes:\n      - ./dat:/app/dat\n      - ./data-logs:/app/data-logs\n    develop: *x-dev\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/config/file.config",
    "content": "This is a config file\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/data/hello.txt",
    "content": "hello world"
  },
  {
    "path": "pkg/e2e/fixtures/watch/data-logs/server.log",
    "content": "[INFO] Server started successfully on port 8080\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/exec.yaml",
    "content": "services:\n  test:\n    build:\n      dockerfile_inline: FROM alpine\n    command: ping localhost\n    volumes:\n      - /data\n    develop:\n      watch:\n        - path: .\n          target: /data\n          initial_sync: true\n          action: sync+exec\n          exec:\n            command: echo \"SUCCESS\""
  },
  {
    "path": "pkg/e2e/fixtures/watch/include.yaml",
    "content": "services:\n  a:\n    build:\n      dockerfile_inline: |\n        FROM nginx\n        RUN mkdir /data/\n    develop:\n      watch:\n        - path: .\n          include: A.*\n          target: /data/\n          action: sync\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/rebuild.yaml",
    "content": "services:\n  a:\n    build:\n      dockerfile_inline: |\n        FROM nginx\n        RUN mkdir /data\n        COPY test /data/a\n    develop:\n      watch:\n        - path: test\n          action: rebuild\n  b:\n    build:\n      dockerfile_inline: |\n        FROM nginx\n        RUN mkdir /data\n        COPY test /data/b\n    develop:\n      watch:\n        - path: test\n          action: rebuild\n  c:\n    build:\n      dockerfile_inline: |\n        FROM nginx\n        RUN mkdir /data\n        COPY test /data/c\n    develop:\n      watch:\n        - path: test\n          action: rebuild\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/with-external-network.yaml",
    "content": "\nservices:\n  ext-alpine:\n    build:\n      dockerfile_inline: |-\n        FROM alpine\n    init: true\n    command: sleep infinity\n    develop: \n      watch:\n        - action: rebuild\n          path: .env\n    networks:\n      - external_network_test\n\nnetworks:\n  external_network_test:\n    name: e2e-watch-external_network_test\n    external: true\n"
  },
  {
    "path": "pkg/e2e/fixtures/watch/x-initialSync.yaml",
    "content": "services:\n  test:\n    build:\n      dockerfile_inline: FROM alpine\n    command: ping localhost\n    volumes:\n      - /data\n    develop:\n      watch:\n        - path: .\n          target: /data\n          action: sync+exec\n          exec:\n            command: echo \"SUCCESS\"\n          x-initialSync: true"
  },
  {
    "path": "pkg/e2e/fixtures/wrong-composefile/build-error.yml",
    "content": "services:\n  simple:\n    build:  service1\n"
  },
  {
    "path": "pkg/e2e/fixtures/wrong-composefile/compose.yaml",
    "content": "services:\n  simple:\n    image:  nginx:alpine\n    wrongField: test\n"
  },
  {
    "path": "pkg/e2e/fixtures/wrong-composefile/service1/Dockerfile",
    "content": "#   Copyright 2020 Docker Compose CLI authors\n\n#   Licensed under the Apache License, Version 2.0 (the \"License\");\n#   you may not use this file except in compliance with the License.\n#   You may obtain a copy of the License at\n\n#       http://www.apache.org/licenses/LICENSE-2.0\n\n#   Unless required by applicable law or agreed to in writing, software\n#   distributed under the License is distributed on an \"AS IS\" BASIS,\n#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#   See the License for the specific language governing permissions and\n#   limitations under the License.\n\nFROM nginx\n\nWRONG DOCKERFILE\n"
  },
  {
    "path": "pkg/e2e/fixtures/wrong-composefile/unknown-image.yml",
    "content": "services:\n  simple:\n    image:  unknownimage\n"
  },
  {
    "path": "pkg/e2e/framework.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/fs\"\n\t\"net/http\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\tcp \"github.com/otiai10/copy\"\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n\t\"gotest.tools/v3/poll\"\n\n\t\"github.com/docker/compose/v5/cmd/compose\"\n)\n\nvar (\n\t// DockerExecutableName is the OS dependent Docker CLI binary name\n\tDockerExecutableName = \"docker\"\n\n\t// DockerComposeExecutableName is the OS dependent Docker CLI binary name\n\tDockerComposeExecutableName = \"docker-\" + compose.PluginName\n\n\t// DockerScanExecutableName is the OS dependent Docker Scan plugin binary name\n\tDockerScanExecutableName = \"docker-scan\"\n\n\t// DockerBuildxExecutableName is the Os dependent Buildx plugin binary name\n\tDockerBuildxExecutableName = \"docker-buildx\"\n\n\t// DockerModelExecutableName is the Os dependent Docker-Model plugin binary name\n\tDockerModelExecutableName = \"docker-model\"\n\n\t// WindowsExecutableSuffix is the Windows executable suffix\n\tWindowsExecutableSuffix = \".exe\"\n)\n\nfunc init() {\n\tif runtime.GOOS == \"windows\" {\n\t\tDockerExecutableName += WindowsExecutableSuffix\n\t\tDockerComposeExecutableName += 
WindowsExecutableSuffix\n\t\tDockerScanExecutableName += WindowsExecutableSuffix\n\t\tDockerBuildxExecutableName += WindowsExecutableSuffix\n\t\tDockerModelExecutableName += WindowsExecutableSuffix\n\t}\n}\n\n// CLI is used to wrap the CLI for end-to-end testing\ntype CLI struct {\n\t// ConfigDir for Docker configuration (set as DOCKER_CONFIG)\n\tConfigDir string\n\n\t// HomeDir for tools that look for user files (set as HOME)\n\tHomeDir string\n\n\t// env overrides to apply to every invoked command\n\t//\n\t// To populate, use WithEnv when creating a CLI instance.\n\tenv []string\n}\n\n// CLIOption to customize behavior for all commands for a CLI instance.\ntype CLIOption func(c *CLI)\n\n// NewParallelCLI marks the parent test as parallel and returns a CLI instance\n// suitable for usage across child tests.\nfunc NewParallelCLI(t *testing.T, opts ...CLIOption) *CLI {\n\tt.Helper()\n\tt.Parallel()\n\treturn NewCLI(t, opts...)\n}\n\n// NewCLI creates a CLI instance for running E2E tests.\nfunc NewCLI(t testing.TB, opts ...CLIOption) *CLI {\n\tt.Helper()\n\n\tconfigDir := t.TempDir()\n\tcopyLocalConfig(t, configDir)\n\tinitializePlugins(t, configDir)\n\tinitializeContextDir(t, configDir)\n\n\tc := &CLI{\n\t\tConfigDir: configDir,\n\t\tHomeDir:   t.TempDir(),\n\t}\n\n\tfor _, opt := range opts {\n\t\topt(c)\n\t}\n\tc.RunDockerComposeCmdNoCheck(t, \"version\")\n\treturn c\n}\n\n// WithEnv sets environment variables that will be passed to commands.\nfunc WithEnv(env ...string) CLIOption {\n\treturn func(c *CLI) {\n\t\tc.env = append(c.env, env...)\n\t}\n}\n\nfunc copyLocalConfig(t testing.TB, configDir string) {\n\tt.Helper()\n\n\t// copy the local config.json to the test config dir, if it exists;\n\t// if no config is present just continue\n\tlocalConfig := filepath.Join(os.Getenv(\"HOME\"), \".docker\", \"config.json\")\n\tif _, err := os.Stat(localConfig); err == nil {\n\t\tCopyFile(t, localConfig, filepath.Join(configDir, \"config.json\"))\n\t}\n}\n\n// initializePlugins copies the necessary plugin 
files to the temporary config\n// directory for the test.\nfunc initializePlugins(t testing.TB, configDir string) {\n\tt.Cleanup(func() {\n\t\tif t.Failed() {\n\t\t\tif conf, err := os.ReadFile(filepath.Join(configDir, \"config.json\")); err == nil {\n\t\t\t\tt.Logf(\"Config: %s\\n\", string(conf))\n\t\t\t}\n\t\t\tt.Log(\"Contents of config dir:\")\n\t\t\tfor _, p := range dirContents(configDir) {\n\t\t\t\tt.Logf(\"  - %s\", p)\n\t\t\t}\n\t\t}\n\t})\n\n\trequire.NoError(t, os.MkdirAll(filepath.Join(configDir, \"cli-plugins\"), 0o755),\n\t\t\"Failed to create cli-plugins directory\")\n\tcomposePlugin, err := findExecutable(DockerComposeExecutableName)\n\tif errors.Is(err, fs.ErrNotExist) {\n\t\tt.Logf(\"WARNING: docker-compose cli-plugin not found\")\n\t}\n\n\tif err == nil {\n\t\tCopyFile(t, composePlugin, filepath.Join(configDir, \"cli-plugins\", DockerComposeExecutableName))\n\t\tbuildxPlugin, err := findPluginExecutable(DockerBuildxExecutableName)\n\t\tif err != nil {\n\t\t\tt.Logf(\"WARNING: docker-buildx cli-plugin not found, using default buildx installation.\")\n\t\t} else {\n\t\t\tCopyFile(t, buildxPlugin, filepath.Join(configDir, \"cli-plugins\", DockerBuildxExecutableName))\n\t\t}\n\t\t// We don't need a functional scan plugin, but a valid plugin binary\n\t\tCopyFile(t, composePlugin, filepath.Join(configDir, \"cli-plugins\", DockerScanExecutableName))\n\n\t\tmodelPlugin, err := findPluginExecutable(DockerModelExecutableName)\n\t\tif err != nil {\n\t\t\tt.Logf(\"WARNING: docker-model cli-plugin not found\")\n\t\t} else {\n\t\t\tCopyFile(t, modelPlugin, filepath.Join(configDir, \"cli-plugins\", DockerModelExecutableName))\n\t\t}\n\t}\n}\n\nfunc initializeContextDir(t testing.TB, configDir string) {\n\tdockerUserDir := \".docker/contexts\"\n\tuserDir, err := os.UserHomeDir()\n\trequire.NoError(t, err, \"Failed to get user home directory\")\n\tuserContextsDir := filepath.Join(userDir, dockerUserDir)\n\tif checkExists(userContextsDir) {\n\t\tdstContexts := 
filepath.Join(configDir, \"contexts\")\n\t\trequire.NoError(t, cp.Copy(userContextsDir, dstContexts), \"Failed to copy contexts directory\")\n\t}\n}\n\nfunc checkExists(path string) bool {\n\t_, err := os.Stat(path)\n\treturn err == nil\n}\n\nfunc dirContents(dir string) []string {\n\tvar res []string\n\t_ = filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {\n\t\tres = append(res, path)\n\t\treturn nil\n\t})\n\treturn res\n}\n\nfunc findExecutable(executableName string) (string, error) {\n\tbin := os.Getenv(\"COMPOSE_E2E_BIN_PATH\")\n\tif bin == \"\" {\n\t\t_, filename, _, _ := runtime.Caller(0)\n\t\tbuildPath := filepath.Join(filepath.Dir(filename), \"..\", \"..\", \"bin\", \"build\")\n\t\tvar err error\n\t\tbin, err = filepath.Abs(filepath.Join(buildPath, executableName))\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\n\tif _, err := os.Stat(bin); err == nil {\n\t\treturn bin, nil\n\t}\n\treturn \"\", fmt.Errorf(\"looking for %q: %w\", bin, fs.ErrNotExist)\n}\n\nfunc findPluginExecutable(pluginExecutableName string) (string, error) {\n\tdockerUserDir := \".docker/cli-plugins\"\n\tuserDir, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tcandidates := []string{\n\t\tfilepath.Join(userDir, dockerUserDir),\n\t\t\"/usr/local/lib/docker/cli-plugins\",\n\t\t\"/usr/local/libexec/docker/cli-plugins\",\n\t\t\"/usr/lib/docker/cli-plugins\",\n\t\t\"/usr/libexec/docker/cli-plugins\",\n\t}\n\tfor _, path := range candidates {\n\t\tbin, err := filepath.Abs(filepath.Join(path, pluginExecutableName))\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tif _, err := os.Stat(bin); err == nil {\n\t\t\treturn bin, nil\n\t\t}\n\t}\n\n\treturn \"\", fmt.Errorf(\"plugin not found %s: %w\", pluginExecutableName, os.ErrNotExist)\n}\n\n// CopyFile copies a file from a sourceFile to a destinationFile setting permissions to 0755\nfunc CopyFile(t testing.TB, sourceFile string, destinationFile string) {\n\tt.Helper()\n\n\tsrc, err 
:= os.Open(sourceFile)\n\trequire.NoError(t, err, \"Failed to open source file: %s\", sourceFile)\n\t//nolint:errcheck\n\tdefer src.Close()\n\n\tdst, err := os.OpenFile(destinationFile, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0o755)\n\trequire.NoError(t, err, \"Failed to open destination file: %s\", destinationFile)\n\t//nolint:errcheck\n\tdefer dst.Close()\n\n\t_, err = io.Copy(dst, src)\n\trequire.NoError(t, err, \"Failed to copy file: %s\", sourceFile)\n}\n\n// BaseEnvironment provides the minimal environment variables used across all\n// Docker / Compose commands.\nfunc (c *CLI) BaseEnvironment() []string {\n\tenv := []string{\n\t\t\"HOME=\" + c.HomeDir,\n\t\t\"USER=\" + os.Getenv(\"USER\"),\n\t\t\"DOCKER_CONFIG=\" + c.ConfigDir,\n\t\t\"KUBECONFIG=invalid\",\n\t\t\"PATH=\" + os.Getenv(\"PATH\"),\n\t}\n\tdockerContextEnv, ok := os.LookupEnv(\"DOCKER_CONTEXT\")\n\tif ok {\n\t\tenv = append(env, \"DOCKER_CONTEXT=\"+dockerContextEnv)\n\t}\n\n\tif coverdir, ok := os.LookupEnv(\"GOCOVERDIR\"); ok {\n\t\t_, filename, _, _ := runtime.Caller(0)\n\t\troot := filepath.Join(filepath.Dir(filename), \"..\", \"..\")\n\t\tcoverdir = filepath.Join(root, coverdir)\n\t\tenv = append(env, fmt.Sprintf(\"GOCOVERDIR=%s\", coverdir))\n\t}\n\treturn env\n}\n\n// NewCmd creates a cmd object configured with the test environment set\nfunc (c *CLI) NewCmd(command string, args ...string) icmd.Cmd {\n\treturn icmd.Cmd{\n\t\tCommand: append([]string{command}, args...),\n\t\tEnv:     append(c.BaseEnvironment(), c.env...),\n\t}\n}\n\n// NewCmdWithEnv creates a cmd object configured with the test environment set with additional env vars\nfunc (c *CLI) NewCmdWithEnv(envvars []string, command string, args ...string) icmd.Cmd {\n\t// base env -> CLI overrides -> cmd overrides\n\tcmdEnv := append(c.BaseEnvironment(), c.env...)\n\tcmdEnv = append(cmdEnv, envvars...)\n\treturn icmd.Cmd{\n\t\tCommand: append([]string{command}, args...),\n\t\tEnv:     cmdEnv,\n\t}\n}\n\n// MetricsSocket gets the path where test metrics 
will be sent\nfunc (c *CLI) MetricsSocket() string {\n\treturn filepath.Join(c.ConfigDir, \"docker-cli.sock\")\n}\n\n// NewDockerCmd creates a docker cmd without running it\nfunc (c *CLI) NewDockerCmd(t testing.TB, args ...string) icmd.Cmd {\n\tt.Helper()\n\tfor _, arg := range args {\n\t\tif arg == compose.PluginName {\n\t\t\tt.Fatal(\"This test called 'RunDockerCmd' for 'compose'. Please prefer 'RunDockerComposeCmd' to be able to test as a plugin and standalone\")\n\t\t}\n\t}\n\treturn c.NewCmd(DockerExecutableName, args...)\n}\n\n// RunDockerOrExitError runs a docker command and returns a result\nfunc (c *CLI) RunDockerOrExitError(t testing.TB, args ...string) *icmd.Result {\n\tt.Helper()\n\tt.Logf(\"\\t[%s] docker %s\\n\", t.Name(), strings.Join(args, \" \"))\n\treturn icmd.RunCmd(c.NewDockerCmd(t, args...))\n}\n\n// RunCmd runs a command, expects no error and returns a result\nfunc (c *CLI) RunCmd(t testing.TB, args ...string) *icmd.Result {\n\tt.Helper()\n\tt.Logf(\"\\t[%s] %s\\n\", t.Name(), strings.Join(args, \" \"))\n\tassert.Assert(t, len(args) >= 1, \"require at least one command in parameters\")\n\tres := icmd.RunCmd(c.NewCmd(args[0], args[1:]...))\n\tres.Assert(t, icmd.Success)\n\treturn res\n}\n\n// RunCmdInDir runs a command in a given dir, expects no error and returns a result\nfunc (c *CLI) RunCmdInDir(t testing.TB, dir string, args ...string) *icmd.Result {\n\tt.Helper()\n\tt.Logf(\"\\t[%s] %s\\n\", t.Name(), strings.Join(args, \" \"))\n\tassert.Assert(t, len(args) >= 1, \"require at least one command in parameters\")\n\tcmd := c.NewCmd(args[0], args[1:]...)\n\tcmd.Dir = dir\n\tres := icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Success)\n\treturn res\n}\n\n// RunDockerCmd runs a docker command, expects no error and returns a result\nfunc (c *CLI) RunDockerCmd(t testing.TB, args ...string) *icmd.Result {\n\tt.Helper()\n\tres := c.RunDockerOrExitError(t, args...)\n\tres.Assert(t, icmd.Success)\n\treturn res\n}\n\n// RunDockerComposeCmd runs a docker 
compose command, expects no error and returns a result\nfunc (c *CLI) RunDockerComposeCmd(t testing.TB, args ...string) *icmd.Result {\n\tt.Helper()\n\tres := c.RunDockerComposeCmdNoCheck(t, args...)\n\tres.Assert(t, icmd.Success)\n\treturn res\n}\n\n// RunDockerComposeCmdNoCheck runs a docker compose command and returns the result without asserting on it\nfunc (c *CLI) RunDockerComposeCmdNoCheck(t testing.TB, args ...string) *icmd.Result {\n\tt.Helper()\n\tcmd := c.NewDockerComposeCmd(t, args...)\n\tcmd.Stdout = os.Stdout\n\tt.Logf(\"Running command: %s\", strings.Join(cmd.Command, \" \"))\n\treturn icmd.RunCmd(cmd)\n}\n\n// NewDockerComposeCmd creates a command object for Compose, either in plugin\n// or standalone mode (based on build tags).\nfunc (c *CLI) NewDockerComposeCmd(t testing.TB, args ...string) icmd.Cmd {\n\tt.Helper()\n\tif composeStandaloneMode {\n\t\treturn c.NewCmd(ComposeStandalonePath(t), args...)\n\t}\n\targs = append([]string{\"compose\"}, args...)\n\treturn c.NewCmd(DockerExecutableName, args...)\n}\n\n// ComposeStandalonePath returns the path to the locally-built Compose\n// standalone binary from the repo.\n//\n// This function will fail the test immediately if invoked when not running\n// in standalone test mode.\nfunc ComposeStandalonePath(t testing.TB) string {\n\tt.Helper()\n\tif !composeStandaloneMode {\n\t\trequire.Fail(t, \"Not running in standalone mode\")\n\t}\n\tcomposeBinary, err := findExecutable(DockerComposeExecutableName)\n\trequire.NoError(t, err, \"Could not find standalone Compose binary (%q)\",\n\t\tDockerComposeExecutableName)\n\treturn composeBinary\n}\n\n// StdoutContains returns a predicate on command result expecting a string in stdout\nfunc StdoutContains(expected string) func(*icmd.Result) bool {\n\treturn func(res *icmd.Result) bool {\n\t\treturn strings.Contains(res.Stdout(), expected)\n\t}\n}\n\n// IsHealthy returns a predicate on command result checking that the given service is reported healthy in JSON ps output\nfunc IsHealthy(service string) func(res *icmd.Result) bool {\n\treturn func(res *icmd.Result) bool {\n\t\ttype 
state struct {\n\t\t\tName   string `json:\"name\"`\n\t\t\tHealth string `json:\"health\"`\n\t\t}\n\n\t\tdecoder := json.NewDecoder(strings.NewReader(res.Stdout()))\n\t\tfor decoder.More() {\n\t\t\tps := state{}\n\t\t\terr := decoder.Decode(&ps)\n\t\t\tif err != nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif ps.Name == service && ps.Health == \"healthy\" {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n}\n\n// WaitForCmdResult executes a cmd repeatedly until its output matches the given predicate\nfunc (c *CLI) WaitForCmdResult(\n\tt testing.TB,\n\tcommand icmd.Cmd,\n\tpredicate func(*icmd.Result) bool,\n\ttimeout time.Duration,\n\tdelay time.Duration,\n) {\n\tt.Helper()\n\tassert.Assert(t, timeout.Nanoseconds() > delay.Nanoseconds(), \"timeout must be greater than delay\")\n\tvar res *icmd.Result\n\tcheckStopped := func(logt poll.LogT) poll.Result {\n\t\tfmt.Printf(\"\\t[%s] %s\\n\", t.Name(), strings.Join(command.Command, \" \"))\n\t\tres = icmd.RunCmd(command)\n\t\tif !predicate(res) {\n\t\t\treturn poll.Continue(\"Cmd output did not match requirement: %q\", res.Combined())\n\t\t}\n\t\treturn poll.Success()\n\t}\n\tpoll.WaitOn(t, checkStopped, poll.WithDelay(delay), poll.WithTimeout(timeout))\n}\n\n// WaitForCondition waits for the predicate to return true\nfunc (c *CLI) WaitForCondition(\n\tt testing.TB,\n\tpredicate func() (bool, string),\n\ttimeout time.Duration,\n\tdelay time.Duration,\n) {\n\tt.Helper()\n\tcheckStopped := func(logt poll.LogT) poll.Result {\n\t\tpass, description := predicate()\n\t\tif !pass {\n\t\t\treturn poll.Continue(\"Condition not met: %q\", description)\n\t\t}\n\t\treturn poll.Success()\n\t}\n\tpoll.WaitOn(t, checkStopped, poll.WithDelay(delay), poll.WithTimeout(timeout))\n}\n\n// Lines splits output into lines\nfunc Lines(output string) []string {\n\treturn strings.Split(strings.TrimSpace(output), \"\\n\")\n}\n\n// HTTPGetWithRetry performs an HTTP GET on an `endpoint`, using retryDelay also as a request timeout.\n// In the 
case of an error, or if the response status is not the expected one, it retries the same request,\n// returning the response body as a string (empty if we could not reach it)\nfunc HTTPGetWithRetry(\n\tt testing.TB,\n\tendpoint string,\n\texpectedStatus int,\n\tretryDelay time.Duration,\n\ttimeout time.Duration,\n) string {\n\tt.Helper()\n\tvar (\n\t\tr   *http.Response\n\t\terr error\n\t)\n\tclient := &http.Client{\n\t\tTimeout: retryDelay,\n\t}\n\tfmt.Printf(\"\\t[%s] GET %s\\n\", t.Name(), endpoint)\n\tcheckUp := func(t poll.LogT) poll.Result {\n\t\tr, err = client.Get(endpoint)\n\t\tif err != nil {\n\t\t\treturn poll.Continue(\"reaching %q: Error %s\", endpoint, err.Error())\n\t\t}\n\t\tif r.StatusCode == expectedStatus {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"reaching %q: %d != %d\", endpoint, r.StatusCode, expectedStatus)\n\t}\n\tpoll.WaitOn(t, checkUp, poll.WithDelay(retryDelay), poll.WithTimeout(timeout))\n\tif r != nil {\n\t\tb, err := io.ReadAll(r.Body)\n\t\tassert.NilError(t, err)\n\t\t_ = r.Body.Close()\n\t\treturn string(b)\n\t}\n\treturn \"\"\n}\n\nfunc (c *CLI) cleanupWithDown(t testing.TB, project string, args ...string) {\n\tt.Helper()\n\tc.RunDockerComposeCmd(t, append([]string{\"-p\", project, \"down\", \"-v\", \"--remove-orphans\"}, args...)...)\n}\n"
  },
  {
    "path": "pkg/e2e/healthcheck_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestStartInterval(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-start-interval\"\n\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/start_interval/compose.yaml\", \"--project-name\", projectName, \"up\", \"--wait\", \"-d\", \"error\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"healthcheck.start_interval requires healthcheck.start_period to be set\"})\n\n\ttimeout := time.After(30 * time.Second)\n\tdone := make(chan bool)\n\tgo func() {\n\t\t//nolint:nolintlint,testifylint // helper asserts inside goroutine; acceptable in this e2e test\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/start_interval/compose.yaml\", \"--project-name\", projectName, \"up\", \"--wait\", \"-d\", \"test\")\n\t\tout := res.Combined()\n\t\tassert.Assert(t, strings.Contains(out, \"Healthy\"), out)\n\t\tdone <- true\n\t}()\n\n\tselect {\n\tcase <-timeout:\n\t\tt.Fatal(\"test did not finish in time\")\n\tcase <-done:\n\t\tbreak\n\t}\n}\n"
  },
  {
    "path": "pkg/e2e/hooks_test.go",
    "content": "/*\nCopyright 2023 Docker Compose CLI authors\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\thttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestPostStartHookInError(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"hooks-post-start-failure\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t})\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/hooks/poststart/compose-error.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1})\n\tassert.Assert(t, strings.Contains(res.Combined(), \"test hook exited with status 127\"), res.Combined())\n}\n\nfunc TestPostStartHookSuccess(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"hooks-post-start-success\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/poststart/compose-success.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n}\n\nfunc TestPreStopHookSuccess(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"hooks-pre-stop-success\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", 
\"fixtures/hooks/prestop/compose-success.yaml\", \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/prestop/compose-success.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/prestop/compose-success.yaml\", \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n}\n\nfunc TestPreStopHookInError(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"hooks-pre-stop-failure\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/prestop/compose-success.yaml\", \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t})\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/hooks/prestop/compose-error.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\tres = c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/hooks/prestop/compose-error.yaml\", \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1})\n\tassert.Assert(t, strings.Contains(res.Combined(), \"sample hook exited with status 127\"))\n}\n\nfunc TestPreStopHookSuccessWithPreviousStop(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"hooks-pre-stop-success-with-previous-stop\"\n\n\tt.Cleanup(func() {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/compose.yaml\", \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\tres = 
c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/compose.yaml\", \"--project-name\", projectName, \"stop\", \"sample\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n}\n\nfunc TestPostStartAndPreStopHook(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"hooks-post-start-and-pre-stop\"\n\n\tt.Cleanup(func() {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/compose.yaml\", \"--project-name\", projectName, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/hooks/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n}\n"
  },
  {
    "path": "pkg/e2e/ipc_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestIPC(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"ipc_e2e\"\n\tvar cid string\n\tt.Run(\"create ipc mode container\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"run\", \"-d\", \"--rm\", \"--ipc=shareable\", \"--name\", \"ipc_mode_container\", \"alpine\",\n\t\t\t\"top\")\n\t\tcid = strings.Trim(res.Stdout(), \"\\n\")\n\t})\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ipc-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"check running project\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tres.Assert(t, icmd.Expected{Out: `shareable`})\n\t})\n\n\tt.Run(\"check ipcmode in container inspect\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"inspect\", projectName+\"-shareable-1\")\n\t\tres.Assert(t, icmd.Expected{Out: `\"IpcMode\": \"shareable\",`})\n\n\t\tres = c.RunDockerCmd(t, \"inspect\", projectName+\"-service-1\")\n\t\tres.Assert(t, icmd.Expected{Out: `\"IpcMode\": \"container:`})\n\n\t\tres = c.RunDockerCmd(t, \"inspect\", projectName+\"-container-1\")\n\t\tres.Assert(t, icmd.Expected{Out: fmt.Sprintf(`\"IpcMode\": \"container:%s\",`, 
cid)})\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\tt.Run(\"remove ipc mode container\", func(t *testing.T) {\n\t\t_ = c.RunDockerCmd(t, \"rm\", \"-f\", \"ipc_mode_container\")\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/logs_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n\t\"gotest.tools/v3/poll\"\n)\n\nfunc TestLocalComposeLogs(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"compose-e2e-logs\"\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/logs-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"logs\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"logs\")\n\t\tres.Assert(t, icmd.Expected{Out: `PING localhost`})\n\t\tres.Assert(t, icmd.Expected{Out: `hello`})\n\t})\n\n\tt.Run(\"logs ping\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"logs\", \"ping\")\n\t\tres.Assert(t, icmd.Expected{Out: `PING localhost`})\n\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"hello\"))\n\t})\n\n\tt.Run(\"logs hello\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"logs\", \"hello\", \"ping\")\n\t\tres.Assert(t, icmd.Expected{Out: `PING localhost`})\n\t\tres.Assert(t, icmd.Expected{Out: `hello`})\n\t})\n\n\tt.Run(\"logs hello index\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, 
\"--project-name\", projectName, \"logs\", \"--index\", \"2\", \"hello\")\n\n\t\t//  docker-compose logs hello\n\t\t// logs-test-hello-2  | hello\n\t\t// logs-test-hello-1  | hello\n\t\tt.Log(res.Stdout())\n\t\tassert.Assert(t, !strings.Contains(res.Stdout(), \"hello-1\"))\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"hello-2\"))\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n}\n\nfunc TestLocalComposeLogsFollow(t *testing.T) {\n\tc := NewCLI(t, WithEnv(\"REPEAT=20\"))\n\tconst projectName = \"compose-e2e-logs\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/logs-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"ping\")\n\n\tcmd := c.NewDockerComposeCmd(t, \"--project-name\", projectName, \"logs\", \"-f\")\n\tres := icmd.StartCmd(cmd)\n\tt.Cleanup(func() {\n\t\t_ = res.Cmd.Process.Kill()\n\t})\n\n\tpoll.WaitOn(t, expectOutput(res, \"ping-1 \"), poll.WithDelay(100*time.Millisecond), poll.WithTimeout(1*time.Second))\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/logs-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tpoll.WaitOn(t, expectOutput(res, \"hello-1 \"), poll.WithDelay(100*time.Millisecond), poll.WithTimeout(1*time.Second))\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/logs-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"--scale\", \"ping=2\", \"ping\")\n\n\tpoll.WaitOn(t, expectOutput(res, \"ping-2 \"), poll.WithDelay(100*time.Millisecond), poll.WithTimeout(20*time.Second))\n}\n\nfunc TestLocalComposeLargeLogs(t *testing.T) {\n\tconst projectName = \"compose-e2e-large_logs\"\n\tfile := filepath.Join(t.TempDir(), \"large.txt\")\n\tc := NewCLI(t, WithEnv(\"FILE=\"+file))\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tf, err := 
os.Create(file)\n\tassert.NilError(t, err)\n\tfor i := range 300_000 {\n\t\t_, err := io.WriteString(f, fmt.Sprintf(\"This is line %d in a laaaarge text file\\n\", i))\n\t\tassert.NilError(t, err)\n\t}\n\tassert.NilError(t, f.Close())\n\n\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/logs-test/cat.yaml\", \"--project-name\", projectName, \"up\", \"--abort-on-container-exit\", \"--menu=false\")\n\tcmd.Stdout = io.Discard\n\tres := icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Expected{Out: \"test-1 exited with code 0\"})\n}\n\nfunc expectOutput(res *icmd.Result, expected string) func(t poll.LogT) poll.Result {\n\treturn func(t poll.LogT) poll.Result {\n\t\tif strings.Contains(res.Stdout(), expected) {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"condition not met\")\n\t}\n}\n"
  },
  {
    "path": "pkg/e2e/main_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"os\"\n\t\"testing\"\n)\n\nfunc TestMain(m *testing.M) {\n\texitCode := m.Run()\n\tos.Exit(exitCode)\n}\n"
  },
  {
    "path": "pkg/e2e/model_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n)\n\nfunc TestComposeModel(t *testing.T) {\n\tt.Skip(\"waiting for docker-model release\")\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"model-test\")\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/model/compose.yaml\", \"run\", \"test\", \"sh\", \"-c\", \"curl ${FOO_URL}\")\n}\n"
  },
  {
    "path": "pkg/e2e/networks_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestNetworks(t *testing.T) {\n\t// fixture is shared with TestNetworkModes and is not safe to run concurrently\n\tconst projectName = \"network-e2e\"\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=\"+projectName,\n\t\t\"COMPOSE_FILE=./fixtures/network-test/compose.yaml\",\n\t))\n\n\tc.RunDockerComposeCmd(t, \"down\", \"-t0\", \"-v\")\n\n\tc.RunDockerComposeCmd(t, \"up\", \"-d\")\n\n\tres := c.RunDockerComposeCmd(t, \"ps\")\n\tres.Assert(t, icmd.Expected{Out: `web`})\n\n\tendpoint := \"http://localhost:80\"\n\toutput := HTTPGetWithRetry(t, endpoint+\"/words/noun\", http.StatusOK, 2*time.Second, 20*time.Second)\n\tassert.Assert(t, strings.Contains(output, `\"word\":`))\n\n\tres = c.RunDockerCmd(t, \"network\", \"ls\")\n\tres.Assert(t, icmd.Expected{Out: projectName + \"_dbnet\"})\n\tres.Assert(t, icmd.Expected{Out: \"microservices\"})\n\n\tres = c.RunDockerComposeCmd(t, \"port\", \"words\", \"8080\")\n\tres.Assert(t, icmd.Expected{Out: `0.0.0.0:8080`})\n\n\tc.RunDockerComposeCmd(t, \"down\", \"-t0\", \"-v\")\n\tres = c.RunDockerCmd(t, \"network\", \"ls\")\n\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\tassert.Assert(t, 
!strings.Contains(res.Combined(), \"microservices\"), res.Combined())\n}\n\nfunc TestNetworkAliases(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"network_alias_e2e\"\n\tdefer c.cleanupWithDown(t, projectName)\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-alias/compose.yaml\", \"--project-name\", projectName, \"up\",\n\t\t\t\"-d\")\n\t})\n\n\tt.Run(\"curl alias\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-alias/compose.yaml\", \"--project-name\", projectName,\n\t\t\t\"exec\", \"-T\", \"container1\", \"curl\", \"http://alias-of-container2/\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"Welcome to nginx!\"), res.Stdout())\n\t})\n\n\tt.Run(\"curl links\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-alias/compose.yaml\", \"--project-name\", projectName,\n\t\t\t\"exec\", \"-T\", \"container1\", \"curl\", \"http://container/\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"Welcome to nginx!\"), res.Stdout())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n}\n\nfunc TestNetworkLinks(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"network_link_e2e\"\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-links/compose.yaml\", \"--project-name\", projectName, \"up\",\n\t\t\t\"-d\")\n\t})\n\n\tt.Run(\"curl links in default bridge network\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-links/compose.yaml\", \"--project-name\", projectName,\n\t\t\t\"exec\", \"-T\", \"container2\", \"curl\", \"http://container1/\")\n\t\tassert.Assert(t, strings.Contains(res.Stdout(), \"Welcome to nginx!\"), res.Stdout())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", 
projectName, \"down\")\n\t})\n}\n\nfunc TestIPAMConfig(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"ipam_e2e\"\n\n\tt.Run(\"ensure we do not reuse previous networks\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"network\", \"rm\", projectName+\"_default\")\n\t})\n\n\tt.Run(\"up\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ipam/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\t})\n\n\tt.Run(\"ensure service get fixed IP assigned\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"inspect\", projectName+\"-foo-1\", \"-f\",\n\t\t\tfmt.Sprintf(`{{ $network := index .NetworkSettings.Networks \"%s_default\" }}{{ $network.IPAMConfig.IPv4Address }}`, projectName))\n\t\tres.Assert(t, icmd.Expected{Out: \"10.1.0.100\"})\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n}\n\nfunc TestNetworkModes(t *testing.T) {\n\t// fixture is shared with TestNetworks and is not safe to run concurrently\n\tc := NewCLI(t)\n\n\tconst projectName = \"network_mode_service_run\"\n\tdefer c.cleanupWithDown(t, projectName)\n\n\tt.Run(\"run with service mode dependency\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-test/compose.yaml\", \"--project-name\", projectName, \"run\", \"-T\", \"mydb\", \"echo\", \"success\")\n\t\tres.Assert(t, icmd.Expected{Out: \"success\"})\n\t})\n}\n\nfunc TestNetworkConfigChanged(t *testing.T) {\n\tt.Skip(\"unstable\")\n\t// fixture is shared with TestNetworks and is not safe to run concurrently\n\tc := NewCLI(t)\n\tconst projectName = \"network_config_change\"\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-test/compose.subnet.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, 
\"exec\", \"test\", \"hostname\", \"-i\")\n\tres.Assert(t, icmd.Expected{Out: \"172.99.0.\"})\n\tres.Combined()\n\n\tcmd := c.NewCmdWithEnv([]string{\"SUBNET=192.168.0.0/16\"},\n\t\t\"docker\", \"compose\", \"-f\", \"./fixtures/network-test/compose.subnet.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres = icmd.RunCmd(cmd)\n\tres.Assert(t, icmd.Success)\n\tout := res.Combined()\n\tfmt.Println(out)\n\n\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"exec\", \"test\", \"hostname\", \"-i\")\n\tres.Assert(t, icmd.Expected{Out: \"192.168.0.\"})\n}\n\nfunc TestMacAddress(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"network_mac_address\"\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-test/mac_address.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\tres := c.RunDockerCmd(t, \"inspect\", fmt.Sprintf(\"%s-test-1\", projectName), \"-f\", \"{{ (index .NetworkSettings.Networks \\\"network_mac_address_default\\\" ).MacAddress }}\")\n\tres.Assert(t, icmd.Expected{Out: \"00:e0:84:35:d0:e8\"})\n}\n\nfunc TestInterfaceName(t *testing.T) {\n\tc := NewCLI(t)\n\n\tversion := c.RunDockerCmd(t, \"version\", \"-f\", \"{{.Server.Version}}\")\n\tmajor, _, found := strings.Cut(version.Combined(), \".\")\n\tassert.Assert(t, found)\n\tif major == \"26\" || major == \"27\" {\n\t\tt.Skip(\"Skipping test due to docker version < 28\")\n\t}\n\n\tconst projectName = \"network_interface_name\"\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-interface-name/compose.yaml\", \"--project-name\", projectName, \"run\", \"test\")\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\tres.Assert(t, icmd.Expected{Out: \"foobar@\"})\n}\n\nfunc TestNetworkRecreate(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"network_recreate\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\tc.RunDockerComposeCmd(t, \"-f\", 
\"./fixtures/network-recreate/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tc = NewCLI(t, WithEnv(\"FOO=bar\"))\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/network-recreate/compose.yaml\", \"--project-name\", projectName, \"--progress=plain\", \"up\", \"-d\")\n\terr := res.Stderr()\n\tfmt.Println(err)\n\thasStopped := strings.Contains(err, \"Stopped\")\n\thasResumed := strings.Contains(err, \"Started\") || strings.Contains(err, \"Recreated\")\n\tif !hasStopped || !hasResumed {\n\t\tt.Fatalf(\"unexpected output, missing expected events, stderr: %s\", err)\n\t}\n}\n"
  },
  {
    "path": "pkg/e2e/noDeps_test.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestNoDepsVolumeFrom(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-no-deps-volume-from\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/no-deps/volume-from.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/no-deps/volume-from.yaml\", \"--project-name\", projectName, \"up\", \"--no-deps\", \"-d\", \"app\")\n\n\tc.RunDockerCmd(t, \"rm\", \"-f\", fmt.Sprintf(\"%s-db-1\", projectName))\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/no-deps/volume-from.yaml\", \"--project-name\", projectName, \"up\", \"--no-deps\", \"-d\", \"app\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"cannot share volume with service db: container missing\"})\n}\n\nfunc TestNoDepsNetworkMode(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-no-deps-network-mode\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/no-deps/network-mode.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tc.RunDockerComposeCmd(t, \"-f\", 
\"fixtures/no-deps/network-mode.yaml\", \"--project-name\", projectName, \"up\", \"--no-deps\", \"-d\", \"app\")\n\n\tc.RunDockerCmd(t, \"rm\", \"-f\", fmt.Sprintf(\"%s-db-1\", projectName))\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/no-deps/network-mode.yaml\", \"--project-name\", projectName, \"up\", \"--no-deps\", \"-d\", \"app\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"cannot share network namespace with service db: container missing\"})\n}\n"
  },
  {
    "path": "pkg/e2e/orphans_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestRemoveOrphans(t *testing.T) {\n\tc := NewCLI(t)\n\n\tconst projectName = \"compose-e2e-orphans\"\n\tdefer c.cleanupWithDown(t, projectName)\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/orphans/compose.yaml\", \"-p\", projectName, \"run\", \"orphan\")\n\tres := c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--all\")\n\tassert.Check(t, strings.Contains(res.Combined(), \"compose-e2e-orphans-orphan-run-\"))\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/orphans/compose.yaml\", \"-p\", projectName, \"up\", \"-d\")\n\n\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--all\")\n\tassert.Check(t, !strings.Contains(res.Combined(), \"compose-e2e-orphans-orphan-run-\"))\n}\n"
  },
  {
    "path": "pkg/e2e/pause_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestPause(t *testing.T) {\n\tif _, ok := os.LookupEnv(\"CI\"); ok {\n\t\tt.Skip(\"Skipping test on CI... flaky\")\n\t}\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-pause\",\n\t\t\"COMPOSE_FILE=./fixtures/pause/compose.yaml\"))\n\n\tcleanup := func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\t// launch both services and verify that they are accessible\n\tcli.RunDockerComposeCmd(t, \"up\", \"-d\")\n\turls := map[string]string{\n\t\t\"a\": urlForService(t, cli, \"a\", 80),\n\t\t\"b\": urlForService(t, cli, \"b\", 80),\n\t}\n\tfor _, url := range urls {\n\t\tHTTPGetWithRetry(t, url, http.StatusOK, 50*time.Millisecond, 20*time.Second)\n\t}\n\n\t// pause a and verify that it can no longer be hit but b still can\n\tcli.RunDockerComposeCmd(t, \"pause\", \"a\")\n\thttpClient := http.Client{Timeout: 250 * time.Millisecond}\n\tresp, err := httpClient.Get(urls[\"a\"])\n\tif resp != nil {\n\t\t_ = resp.Body.Close()\n\t}\n\trequire.Error(t, err, \"a should no longer respond\")\n\tvar netErr net.Error\n\terrors.As(err, 
&netErr)\n\trequire.True(t, netErr != nil && netErr.Timeout(), \"Error should have indicated a timeout\")\n\tHTTPGetWithRetry(t, urls[\"b\"], http.StatusOK, 50*time.Millisecond, 5*time.Second)\n\n\t// unpause a and verify that both containers work again\n\tcli.RunDockerComposeCmd(t, \"unpause\", \"a\")\n\tfor _, url := range urls {\n\t\tHTTPGetWithRetry(t, url, http.StatusOK, 50*time.Millisecond, 5*time.Second)\n\t}\n}\n\nfunc TestPauseServiceNotRunning(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-pause-svc-not-running\",\n\t\t\"COMPOSE_FILE=./fixtures/pause/compose.yaml\"))\n\n\tcleanup := func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\t// pause a service that was never started\n\tres := cli.RunDockerComposeCmdNoCheck(t, \"pause\", \"a\")\n\n\t// TODO: `docker pause` errors in this case, should Compose be consistent?\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n}\n\nfunc TestPauseServiceAlreadyPaused(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-pause-svc-already-paused\",\n\t\t\"COMPOSE_FILE=./fixtures/pause/compose.yaml\"))\n\n\tcleanup := func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\t// launch a and wait for it to come up\n\tcli.RunDockerComposeCmd(t, \"up\", \"--menu=false\", \"--wait\", \"a\")\n\tHTTPGetWithRetry(t, urlForService(t, cli, \"a\", 80), http.StatusOK, 50*time.Millisecond, 10*time.Second)\n\n\t// pause a twice - first time should pass, second time fail\n\tcli.RunDockerComposeCmd(t, \"pause\", \"a\")\n\tres := cli.RunDockerComposeCmdNoCheck(t, \"pause\", \"a\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"already paused\"})\n}\n\nfunc TestPauseServiceDoesNotExist(t *testing.T) {\n\tcli := NewParallelCLI(t, 
WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-pause-svc-not-exist\",\n\t\t\"COMPOSE_FILE=./fixtures/pause/compose.yaml\"))\n\n\tcleanup := func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"-v\", \"--remove-orphans\", \"-t\", \"0\")\n\t}\n\tcleanup()\n\tt.Cleanup(cleanup)\n\n\t// pause a and verify that it can no longer be hit but b still can\n\tres := cli.RunDockerComposeCmdNoCheck(t, \"pause\", \"does_not_exist\")\n\t// TODO: `compose down does_not_exist` and similar error, this should too\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n}\n\nfunc urlForService(t testing.TB, cli *CLI, service string, targetPort int) string {\n\tt.Helper()\n\treturn fmt.Sprintf(\n\t\t\"http://localhost:%d\",\n\t\tpublishedPortForService(t, cli, service, targetPort),\n\t)\n}\n\nfunc publishedPortForService(t testing.TB, cli *CLI, service string, targetPort int) int {\n\tt.Helper()\n\tres := cli.RunDockerComposeCmd(t, \"ps\", \"--format=json\", service)\n\tvar svc struct {\n\t\tPublishers []struct {\n\t\t\tTargetPort    int\n\t\t\tPublishedPort int\n\t\t}\n\t}\n\trequire.NoError(t, json.Unmarshal([]byte(res.Stdout()), &svc),\n\t\t\"Failed to parse `%s` output\", res.Cmd.String())\n\tfor _, pp := range svc.Publishers {\n\t\tif pp.TargetPort == targetPort {\n\t\t\treturn pp.PublishedPort\n\t\t}\n\t}\n\trequire.Failf(t, \"No published port for target port\",\n\t\t\"Target port: %d\\nService: %s\", targetPort, res.Combined())\n\treturn -1\n}\n"
  },
  {
    "path": "pkg/e2e/profiles_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nconst (\n\tprofiledService = \"profiled-service\"\n\tregularService  = \"regular-service\"\n)\n\nfunc TestExplicitProfileUsage(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-explicit-profiles\"\n\tconst profileName = \"test-profile\"\n\n\tt.Run(\"compose up with profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"--profile\", profileName, \"up\", \"-d\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n\n\tt.Run(\"compose stop with profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"--profile\", profileName, \"stop\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), profiledService))\n\t})\n\n\tt.Run(\"compose 
start with profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"--profile\", profileName, \"start\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n\n\tt.Run(\"compose restart with profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"--profile\", profileName, \"restart\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tt.Run(\"check containers after down\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\t})\n}\n\nfunc TestNoProfileUsage(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-no-profiles\"\n\n\tt.Run(\"compose up without profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"up\", \"-d\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), profiledService))\n\t})\n\n\tt.Run(\"compose stop without profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, 
\"stop\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), profiledService))\n\t})\n\n\tt.Run(\"compose start without profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"start\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), profiledService))\n\t})\n\n\tt.Run(\"compose restart without profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"restart\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), profiledService))\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tt.Run(\"check containers after down\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\t})\n}\n\nfunc TestActiveProfileViaTargetedService(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-via-target-service-profiles\"\n\tconst profileName = \"test-profile\"\n\n\tt.Run(\"compose up with service name\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"up\", profiledService, \"-d\")\n\t\tres.Assert(t, 
icmd.Expected{ExitCode: 0})\n\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"--profile\", profileName, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n\n\tt.Run(\"compose stop with service name\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"stop\", profiledService)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), profiledService))\n\t})\n\n\tt.Run(\"compose start with service name\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"start\", profiledService)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n\n\tt.Run(\"compose restart with service name\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"-p\", projectName, \"restart\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\", \"--status\", \"running\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), regularService))\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", 
projectName, \"down\")\n\t})\n\n\tt.Run(\"check containers after down\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\t})\n}\n\nfunc TestDotEnvProfileUsage(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-dotenv-profiles\"\n\tconst profileName = \"test-profile\"\n\n\tt.Cleanup(func() {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tt.Run(\"compose up with profile\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/compose.yaml\",\n\t\t\t\"--env-file\", \"./fixtures/profiles/test-profile.env\",\n\t\t\t\"-p\", projectName, \"--profile\", profileName, \"up\", \"-d\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tres = c.RunDockerComposeCmd(t, \"-p\", projectName, \"ps\")\n\t\tres.Assert(t, icmd.Expected{Out: regularService})\n\t\tres.Assert(t, icmd.Expected{Out: profiledService})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/providers_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"bufio\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestDependsOnMultipleProviders(t *testing.T) {\n\tprovider, err := findExecutable(\"example-provider\")\n\tassert.NilError(t, err)\n\n\tpath := fmt.Sprintf(\"%s%s%s\", os.Getenv(\"PATH\"), string(os.PathListSeparator), filepath.Dir(provider))\n\tc := NewParallelCLI(t, WithEnv(\"PATH=\"+path))\n\tconst projectName = \"depends-on-multiple-providers\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/providers/depends-on-multiple-providers.yaml\", \"--project-name\", projectName, \"up\")\n\tres.Assert(t, icmd.Success)\n\tenv := getEnv(res.Combined(), false)\n\tassert.Check(t, slices.Contains(env, \"PROVIDER1_URL=https://magic.cloud/provider1\"), env)\n\tassert.Check(t, slices.Contains(env, \"PROVIDER2_URL=https://magic.cloud/provider2\"), env)\n}\n\nfunc getEnv(out string, run bool) []string {\n\tvar env []string\n\tscanner := bufio.NewScanner(strings.NewReader(out))\n\tfor scanner.Scan() {\n\t\tline := scanner.Text()\n\t\tif !run && strings.HasPrefix(line, \"test-1  | \") {\n\t\t\tenv = append(env, line[10:])\n\t\t}\n\t\tif run && strings.Contains(line, \"=\") && 
len(strings.Split(line, \"=\")) == 2 {\n\t\t\tenv = append(env, line)\n\t\t}\n\t}\n\tslices.Sort(env)\n\treturn env\n}\n"
  },
  {
    "path": "pkg/e2e/ps_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"encoding/json\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/icmd\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nfunc TestPs(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-ps\"\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\trequire.NoError(t, res.Error)\n\tt.Cleanup(func() {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tassert.Contains(t, res.Combined(), \"Container e2e-ps-busybox-1 Started\", res.Combined())\n\n\tt.Run(\"table\", func(t *testing.T) {\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"ps\")\n\t\tlines := strings.Split(res.Stdout(), \"\\n\")\n\t\tassert.Len(t, lines, 4)\n\t\tcount := 0\n\t\tfor _, line := range lines[1:3] {\n\t\t\tif strings.Contains(line, \"e2e-ps-busybox-1\") {\n\t\t\t\tassert.Contains(t, line, \"127.0.0.1:8001->8000/tcp\")\n\t\t\t\tcount++\n\t\t\t}\n\t\t\tif strings.Contains(line, \"e2e-ps-nginx-1\") {\n\t\t\t\tassert.Contains(t, line, \"80/tcp, 443/tcp, 8080/tcp\")\n\t\t\t\tcount++\n\t\t\t}\n\t\t}\n\t\tassert.Equal(t, 2, count, \"Did not match both 
services:\\n\"+res.Combined())\n\t})\n\n\tt.Run(\"json\", func(t *testing.T) {\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"ps\",\n\t\t\t\"--format\", \"json\")\n\t\ttype element struct {\n\t\t\tName       string\n\t\t\tProject    string\n\t\t\tPublishers api.PortPublishers\n\t\t}\n\t\tvar output []element\n\t\tout := res.Stdout()\n\t\tdec := json.NewDecoder(strings.NewReader(out))\n\t\tfor dec.More() {\n\t\t\tvar s element\n\t\t\trequire.NoError(t, dec.Decode(&s), \"Failed to unmarshal ps JSON output\")\n\t\t\toutput = append(output, s)\n\t\t}\n\n\t\tcount := 0\n\t\tassert.Len(t, output, 2)\n\t\tfor _, service := range output {\n\t\t\tassert.Equal(t, projectName, service.Project)\n\t\t\tpublishers := service.Publishers\n\t\t\tif service.Name == \"e2e-ps-busybox-1\" {\n\t\t\t\tassert.Len(t, publishers, 1)\n\t\t\t\tassert.Equal(t, api.PortPublishers{\n\t\t\t\t\t{\n\t\t\t\t\t\tURL:           \"127.0.0.1\",\n\t\t\t\t\t\tTargetPort:    8000,\n\t\t\t\t\t\tPublishedPort: 8001,\n\t\t\t\t\t\tProtocol:      \"tcp\",\n\t\t\t\t\t},\n\t\t\t\t}, publishers)\n\t\t\t\tcount++\n\t\t\t}\n\t\t\tif service.Name == \"e2e-ps-nginx-1\" {\n\t\t\t\tassert.Len(t, publishers, 3)\n\t\t\t\tassert.Equal(t, api.PortPublishers{\n\t\t\t\t\t{TargetPort: 80, Protocol: \"tcp\"},\n\t\t\t\t\t{TargetPort: 443, Protocol: \"tcp\"},\n\t\t\t\t\t{TargetPort: 8080, Protocol: \"tcp\"},\n\t\t\t\t}, publishers)\n\n\t\t\t\tcount++\n\t\t\t}\n\t\t}\n\t\tassert.Equal(t, 2, count, \"Did not match both services:\\n\"+res.Combined())\n\t})\n\n\tt.Run(\"ps --all\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"stop\")\n\t\trequire.NoError(t, res.Error)\n\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"ps\")\n\t\tlines := strings.Split(res.Stdout(), \"\\n\")\n\t\tassert.Len(t, lines, 2)\n\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", 
\"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"ps\", \"--all\")\n\t\tlines = strings.Split(res.Stdout(), \"\\n\")\n\t\tassert.Len(t, lines, 4)\n\t})\n\n\tt.Run(\"ps unknown\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"stop\")\n\t\trequire.NoError(t, res.Error)\n\n\t\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"ps\", \"nginx\")\n\t\tres.Assert(t, icmd.Success)\n\n\t\tres = c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/ps-test/compose.yaml\", \"--project-name\", projectName, \"ps\", \"unknown\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"no such service: unknown\"})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/publish_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestPublishChecks(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-explicit-profiles\"\n\n\tt.Run(\"publish error env_file\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/publish/compose-env-file.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: `service \"serviceA\" has env_file declared.\nTo avoid leaking sensitive data,`})\n\t})\n\n\tt.Run(\"publish multiple errors env_file and environment\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/publish/compose-multi-env-config.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\")\n\t\t// we don't in which order the services will be loaded, so we can't predict the order of the error messages\n\t\tassert.Assert(t, strings.Contains(res.Combined(), `service \"serviceB\" has env_file declared.`), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), `To avoid leaking sensitive data, you must either explicitly allow the sending of environment variables by using the --with-env flag,\nor remove sensitive data from your Compose configuration\n`), 
res.Combined())\n\t})\n\n\tt.Run(\"publish success environment\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/publish/compose-environment.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--with-env\", \"-y\", \"--dry-run\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"test/test publishing\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"test/test published\"), res.Combined())\n\t})\n\n\tt.Run(\"publish success env_file\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/publish/compose-env-file.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--with-env\", \"-y\", \"--dry-run\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"test/test publishing\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"test/test published\"), res.Combined())\n\t})\n\n\tt.Run(\"publish with extends\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/publish/compose-with-extends.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--dry-run\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"test/test published\"), res.Combined())\n\t})\n\n\tt.Run(\"refuse to publish with bind mount\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/publish/compose-bind-mount.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--dry-run\")\n\t\tcmd.Stdin = strings.NewReader(\"n\\n\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tout := res.Combined()\n\t\tassert.Assert(t, strings.Contains(out, \"you are about to publish bind mounts declaration within your OCI artifact.\"), out)\n\t\tassert.Assert(t, strings.Contains(out, \"e2e/fixtures/publish:/user-data\"), out)\n\t\tassert.Assert(t, strings.Contains(out, \"Are you ok to publish these bind mount declarations?\"), out)\n\t\tassert.Assert(t, 
!strings.Contains(out, \"serviceA published\"), out)\n\t})\n\n\tt.Run(\"publish with bind mount\", func(t *testing.T) {\n\t\tcmd := c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/publish/compose-bind-mount.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--dry-run\")\n\t\tcmd.Stdin = strings.NewReader(\"y\\n\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"you are about to publish bind mounts declaration within your OCI artifact.\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Are you ok to publish these bind mount declarations?\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e/fixtures/publish:/user-data\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"test/test published\"), res.Combined())\n\t})\n\n\tt.Run(\"refuse to publish with build section only\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/publish/compose-build-only.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--with-env\", \"-y\", \"--dry-run\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1})\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"your Compose stack cannot be published as it only contains a build section for service(s):\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"serviceA\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"serviceB\"), res.Combined())\n\t})\n\n\tt.Run(\"refuse to publish with local include\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"./fixtures/publish/compose-local-include.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--dry-run\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"cannot publish compose file with local includes\"})\n\t})\n\n\tt.Run(\"detect sensitive data\", func(t *testing.T) {\n\t\tcmd := 
c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/publish/compose-sensitive.yml\",\n\t\t\t\"-p\", projectName, \"publish\", \"test/test\", \"--with-env\", \"--dry-run\")\n\t\tcmd.Stdin = strings.NewReader(\"n\\n\")\n\t\tres := icmd.RunCmd(cmd)\n\t\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t\toutput := res.Combined()\n\t\tassert.Assert(t, strings.Contains(output, \"you are about to publish sensitive data within your OCI artifact.\\n\"), output)\n\t\tassert.Assert(t, strings.Contains(output, \"please double check that you are not leaking sensitive data\"), output)\n\t\tassert.Assert(t, strings.Contains(output, \"AWS Client ID\\n\\\"services.serviceA.environment.AWS_ACCESS_KEY_ID\\\": A3TX1234567890ABCDEF\"), output)\n\t\tassert.Assert(t, strings.Contains(output, \"AWS Secret Key\\n\\\"services.serviceA.environment.AWS_SECRET_ACCESS_KEY\\\": aws\\\"12345+67890/abcdefghijklm+NOPQRSTUVWXYZ+\\\"\"), output)\n\t\tassert.Assert(t, strings.Contains(output, \"Github authentication\\n\\\"GITHUB_TOKEN\\\": ghp_1234567890abcdefghijklmnopqrstuvwxyz\"), output)\n\t\tassert.Assert(t, strings.Contains(output, \"JSON Web Token\\n\\\"\\\": eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.\"+\n\t\t\t\"eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw\"), output)\n\t\tassert.Assert(t, strings.Contains(output, \"Private Key\\n\\\"\\\": -----BEGIN DSA PRIVATE KEY-----\\nwxyz+ABC=\\n-----END DSA PRIVATE KEY-----\"), output)\n\t})\n}\n\nfunc TestPublish(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-publish\"\n\tconst registryName = projectName + \"-registry\"\n\tc.RunDockerCmd(t, \"run\", \"--name\", registryName, \"-P\", \"-d\", \"registry:3\")\n\tport := c.RunDockerCmd(t, \"inspect\", \"--format\", `{{ (index (index .NetworkSettings.Ports \"5000/tcp\") 0).HostPort }}`, registryName).Stdout()\n\tregistry := \"localhost:\" + strings.TrimSpace(port)\n\tt.Cleanup(func() {\n\t\tc.RunDockerCmd(t, \"rm\", 
\"--force\", registryName)\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/publish/oci/compose.yaml\", \"-f\", \"./fixtures/publish/oci/compose-override.yaml\",\n\t\t\"-p\", projectName, \"publish\", \"--with-env\", \"--yes\", \"--insecure-registry\", registry+\"/test:test\")\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\n\t// docker exec -it compose-e2e-publish-registry tree /var/lib/registry/docker/registry/v2/\n\n\tcmd := c.NewDockerComposeCmd(t, \"--verbose\", \"--project-name=oci\",\n\t\t\"--insecure-registry\", registry,\n\t\t\"-f\", fmt.Sprintf(\"oci://%s/test:test\", registry), \"config\")\n\tres = icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\tcmd.Env = append(cmd.Env, \"XDG_CACHE_HOME=\"+t.TempDir())\n\t})\n\tres.Assert(t, icmd.Expected{ExitCode: 0})\n\tassert.Equal(t, res.Stdout(), `name: oci\nservices:\n  app:\n    environment:\n      HELLO: WORLD\n    image: alpine\n    networks:\n      default: null\nnetworks:\n  default:\n    name: oci_default\n`)\n}\n"
  },
  {
    "path": "pkg/e2e/pull_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestComposePull(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"Verify image pulled\", func(t *testing.T) {\n\t\t// cleanup existing images\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/compose-pull/simple\", \"down\", \"--rmi\", \"all\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/compose-pull/simple\", \"pull\")\n\t\toutput := res.Combined()\n\n\t\tassert.Assert(t, strings.Contains(output, \"Image alpine:3.14 Pulled\"))\n\t\tassert.Assert(t, strings.Contains(output, \"Image alpine:3.15 Pulled\"))\n\n\t\t// verify default policy is 'always' for pull command\n\t\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/compose-pull/simple\", \"pull\")\n\t\toutput = res.Combined()\n\n\t\tassert.Assert(t, strings.Contains(output, \"Image alpine:3.14 Pulled\"))\n\t\tassert.Assert(t, strings.Contains(output, \"Image alpine:3.15 Pulled\"))\n\t})\n\n\tt.Run(\"Verify skipped pull if image is already present locally\", func(t *testing.T) {\n\t\t// make sure the required image is present\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/compose-pull/image-present-locally\", \"pull\")\n\n\t\tres := c.RunDockerComposeCmd(t, 
\"--project-directory\", \"fixtures/compose-pull/image-present-locally\", \"pull\")\n\t\toutput := res.Combined()\n\n\t\tassert.Assert(t, strings.Contains(output, \"alpine:3.13.12 Skipped Image is already present locally\"))\n\t\t// image with :latest tag gets pulled regardless if pull_policy: missing or if_not_present\n\t\tassert.Assert(t, strings.Contains(output, \"alpine:latest Pulled\"))\n\t})\n\n\tt.Run(\"Verify skipped no image to be pulled\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/compose-pull/no-image-name-given\", \"pull\")\n\t\toutput := res.Combined()\n\n\t\tassert.Assert(t, strings.Contains(output, \"Skipped No image to be pulled\"))\n\t})\n\n\tt.Run(\"Verify pull failure\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/compose-pull/unknown-image\", \"pull\")\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"pull access denied for does_not_exists\"})\n\t})\n\n\tt.Run(\"Verify ignore pull failure\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/compose-pull/unknown-image\", \"pull\", \"--ignore-pull-failures\")\n\t\tres.Assert(t, icmd.Expected{Err: \"Some service image(s) must be built from source by running:\"})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/recreate_no_deps_test.go",
    "content": "/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestRecreateWithNoDeps(t *testing.T) {\n\tc := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=recreate-no-deps\",\n\t))\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/dependencies/recreate-no-deps.yaml\", \"up\", \"-d\")\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/dependencies/recreate-no-deps.yaml\", \"up\", \"-d\", \"--force-recreate\", \"--no-deps\", \"my-service\")\n\tres.Assert(t, icmd.Success)\n\n\tRequireServiceState(t, c, \"my-service\", \"running\")\n\n\tc.RunDockerComposeCmd(t, \"down\")\n}\n"
  },
  {
    "path": "pkg/e2e/restart_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\ttestify \"github.com/stretchr/testify/assert\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc assertServiceStatus(t *testing.T, projectName, service, status string, ps string) {\n\t// match output with random spaces like:\n\t// e2e-start-stop-db-1      alpine:latest \"echo hello\"     db\t1 minutes ago\tExited (0) 1 minutes ago\n\tregx := fmt.Sprintf(\"%s-%s-1.+%s\\\\s+.+%s.+\", projectName, service, service, status)\n\ttestify.Regexp(t, regx, ps)\n}\n\nfunc TestRestart(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-restart\"\n\n\tt.Run(\"Up a project\", func(t *testing.T) {\n\t\t// This is just to ensure the containers do NOT exist\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/restart-test/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-restart-restart-1 Started\"), res.Combined())\n\n\t\tc.WaitForCmdResult(t, c.NewDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"-a\", \"--format\",\n\t\t\t\"json\"),\n\t\t\tStdoutContains(`\"State\":\"exited\"`), 10*time.Second, 1*time.Second)\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, 
\"ps\", \"-a\")\n\t\tassertServiceStatus(t, projectName, \"restart\", \"Exited\", res.Stdout())\n\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/restart-test/compose.yaml\", \"--project-name\", projectName, \"restart\")\n\n\t\t// Give the same time but it must NOT exit\n\t\ttime.Sleep(time.Second)\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\")\n\t\tassertServiceStatus(t, projectName, \"restart\", \"Up\", res.Stdout())\n\n\t\t// Clean up\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n}\n\nfunc TestRestartWithDependencies(t *testing.T) {\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-restart-deps\",\n\t))\n\tbaseService := \"nginx\"\n\tdepWithRestart := \"with-restart\"\n\tdepNoRestart := \"no-restart\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--remove-orphans\")\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/restart-test/compose-depends-on.yaml\", \"up\", \"-d\")\n\n\tres := c.RunDockerComposeCmd(t, \"restart\", baseService)\n\tout := res.Combined()\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Restarting\", baseService)), out)\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Healthy\", baseService)), out)\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Started\", depWithRestart)), out)\n\tassert.Assert(t, !strings.Contains(out, depNoRestart), out)\n\n\tc = NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-restart-deps\",\n\t\t\"LABEL=recreate\",\n\t))\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/restart-test/compose-depends-on.yaml\", \"up\", \"-d\")\n\tout = res.Combined()\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Stopped\", depWithRestart)), out)\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Recreated\", baseService)), 
out)\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Healthy\", baseService)), out)\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Started\", depWithRestart)), out)\n\tassert.Assert(t, strings.Contains(out, fmt.Sprintf(\"Container e2e-restart-deps-%s-1 Running\", depNoRestart)), out)\n}\n\nfunc TestRestartWithProfiles(t *testing.T) {\n\tc := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-restart-profiles\",\n\t))\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--remove-orphans\")\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/restart-test/compose.yaml\", \"--profile\", \"test\", \"up\", \"-d\")\n\n\tres := c.RunDockerComposeCmd(t, \"restart\", \"test\")\n\tfmt.Println(res.Combined())\n\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-restart-profiles-test-1 Started\"), res.Combined())\n}\n"
  },
  {
    "path": "pkg/e2e/scale_test.go",
    "content": "/*\nCopyright 2020 Docker Compose CLI authors\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\thttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\ttestify \"github.com/stretchr/testify/assert\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nconst NO_STATE_TO_CHECK = \"\"\n\nfunc TestScaleBasicCases(t *testing.T) {\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=scale-basic-tests\"))\n\n\treset := func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--rmi\", \"all\")\n\t}\n\tt.Cleanup(reset)\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\")\n\tres.Assert(t, icmd.Success)\n\n\tt.Log(\"scale up one service\")\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"scale\", \"dbadmin=2\")\n\tout := res.Combined()\n\tcheckServiceContainer(t, out, \"scale-basic-tests-dbadmin\", \"Started\", 2)\n\n\tt.Log(\"scale up 2 services\")\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"scale\", \"front=3\", \"back=2\")\n\tout = res.Combined()\n\tcheckServiceContainer(t, out, \"scale-basic-tests-front\", \"Running\", 2)\n\tcheckServiceContainer(t, out, \"scale-basic-tests-front\", \"Started\", 1)\n\tcheckServiceContainer(t, out, \"scale-basic-tests-back\", \"Running\", 1)\n\tcheckServiceContainer(t, out, \"scale-basic-tests-back\", \"Started\", 1)\n\n\tt.Log(\"scale down one service\")\n\tres = c.RunDockerComposeCmd(t, 
\"--project-directory\", \"fixtures/scale\", \"scale\", \"dbadmin=1\")\n\tout = res.Combined()\n\tcheckServiceContainer(t, out, \"scale-basic-tests-dbadmin\", \"Running\", 1)\n\n\tt.Log(\"scale to 0 a service\")\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"scale\", \"dbadmin=0\")\n\tassert.Check(t, res.Stdout() == \"\", res.Stdout())\n\n\tt.Log(\"scale down 2 services\")\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"scale\", \"front=2\", \"back=1\")\n\tout = res.Combined()\n\tcheckServiceContainer(t, out, \"scale-basic-tests-front\", \"Running\", 2)\n\tassert.Check(t, !strings.Contains(out, \"Container scale-basic-tests-front-3  Running\"), res.Combined())\n\tcheckServiceContainer(t, out, \"scale-basic-tests-back\", \"Running\", 1)\n}\n\nfunc TestScaleWithDepsCases(t *testing.T) {\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=scale-deps-tests\"))\n\n\treset := func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--rmi\", \"all\")\n\t}\n\tt.Cleanup(reset)\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"--scale\", \"db=2\")\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\")\n\tcheckServiceContainer(t, res.Combined(), \"scale-deps-tests-db\", NO_STATE_TO_CHECK, 2)\n\n\tt.Log(\"scale up 1 service with --no-deps\")\n\t_ = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"scale\", \"--no-deps\", \"back=2\")\n\tres = c.RunDockerComposeCmd(t, \"ps\")\n\tcheckServiceContainer(t, res.Combined(), \"scale-deps-tests-back\", NO_STATE_TO_CHECK, 2)\n\tcheckServiceContainer(t, res.Combined(), \"scale-deps-tests-db\", NO_STATE_TO_CHECK, 2)\n\n\tt.Log(\"scale up 1 service without --no-deps\")\n\t_ = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"scale\", \"back=2\")\n\tres = c.RunDockerComposeCmd(t, \"ps\")\n\tcheckServiceContainer(t, res.Combined(), \"scale-deps-tests-back\", 
NO_STATE_TO_CHECK, 2)\n\tcheckServiceContainer(t, res.Combined(), \"scale-deps-tests-db\", NO_STATE_TO_CHECK, 1)\n}\n\nfunc TestScaleUpAndDownPreserveContainerNumber(t *testing.T) {\n\tconst projectName = \"scale-up-down-test\"\n\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=\"+projectName))\n\n\treset := func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--rmi\", \"all\")\n\t}\n\tt.Cleanup(reset)\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"--scale\", \"db=2\", \"db\")\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"db\")\n\tres.Assert(t, icmd.Success)\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), projectName+\"-db-1\\n\"+projectName+\"-db-2\")\n\n\tt.Log(\"scale down removes replica #2\")\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"--scale\", \"db=1\", \"db\")\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"db\")\n\tres.Assert(t, icmd.Success)\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), projectName+\"-db-1\")\n\n\tt.Log(\"scale up restores replica #2\")\n\tres = c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"--scale\", \"db=2\", \"db\")\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"db\")\n\tres.Assert(t, icmd.Success)\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), projectName+\"-db-1\\n\"+projectName+\"-db-2\")\n}\n\nfunc TestScaleDownRemovesObsolete(t *testing.T) {\n\tconst projectName = \"scale-down-obsolete-test\"\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=\"+projectName))\n\n\treset := func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--rmi\", \"all\")\n\t}\n\tt.Cleanup(reset)\n\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"db\")\n\tres.Assert(t, 
icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"db\")\n\tres.Assert(t, icmd.Success)\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), projectName+\"-db-1\")\n\n\tcmd := c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"--scale\", \"db=2\", \"db\")\n\tres = icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\tcmd.Env = append(cmd.Env, \"MAYBE=value\")\n\t})\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"db\")\n\tres.Assert(t, icmd.Success)\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), projectName+\"-db-1\\n\"+projectName+\"-db-2\")\n\n\tt.Log(\"scale down removes obsolete replica #1\")\n\tcmd = c.NewDockerComposeCmd(t, \"--project-directory\", \"fixtures/scale\", \"up\", \"-d\", \"--scale\", \"db=1\", \"db\")\n\tres = icmd.RunCmd(cmd, func(cmd *icmd.Cmd) {\n\t\tcmd.Env = append(cmd.Env, \"MAYBE=value\")\n\t})\n\tres.Assert(t, icmd.Success)\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"db\")\n\tres.Assert(t, icmd.Success)\n\tassert.Equal(t, strings.TrimSpace(res.Stdout()), projectName+\"-db-1\")\n}\n\nfunc checkServiceContainer(t *testing.T, stdout, containerName, containerState string, count int) {\n\tfound := 0\n\tlines := strings.SplitSeq(stdout, \"\\n\")\n\tfor line := range lines {\n\t\tif strings.Contains(line, containerName) && strings.Contains(line, containerState) {\n\t\t\tfound++\n\t\t}\n\t}\n\tif found == count {\n\t\treturn\n\t}\n\terrMessage := fmt.Sprintf(\"expected %d but found %d instance(s) of container %s in stdout\", count, found, containerName)\n\tif containerState != \"\" {\n\t\terrMessage += fmt.Sprintf(\" with expected state %s\", containerState)\n\t}\n\ttestify.Fail(t, errMessage, stdout)\n}\n\nfunc TestScaleDownNoRecreate(t *testing.T) {\n\tconst projectName = \"scale-down-recreated-test\"\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=\"+projectName))\n\n\treset := 
func() {\n\t\tc.RunDockerComposeCmd(t, \"down\", \"--rmi\", \"all\")\n\t}\n\tt.Cleanup(reset)\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/scale/build.yaml\", \"build\", \"--build-arg\", \"FOO=test\")\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/scale/build.yaml\", \"up\", \"-d\", \"--scale\", \"test=2\")\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/scale/build.yaml\", \"build\", \"--build-arg\", \"FOO=updated\")\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/scale/build.yaml\", \"up\", \"-d\", \"--scale\", \"test=4\", \"--no-recreate\")\n\n\tres := c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"test\")\n\tres.Assert(t, icmd.Success)\n\tassert.Check(t, strings.Contains(res.Stdout(), \"scale-down-recreated-test-test-1\"))\n\tassert.Check(t, strings.Contains(res.Stdout(), \"scale-down-recreated-test-test-2\"))\n\tassert.Check(t, strings.Contains(res.Stdout(), \"scale-down-recreated-test-test-3\"))\n\tassert.Check(t, strings.Contains(res.Stdout(), \"scale-down-recreated-test-test-4\"))\n\n\tt.Log(\"scale down removes obsolete replica #1 and #2\")\n\t// actually run the scale-down (a command built with NewDockerComposeCmd but never executed is a no-op)\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/scale/build.yaml\", \"up\", \"-d\", \"--scale\", \"test=2\")\n\n\tres = c.RunDockerComposeCmd(t, \"ps\", \"--format\", \"{{.Name}}\", \"test\")\n\tres.Assert(t, icmd.Success)\n\tassert.Check(t, strings.Contains(res.Stdout(), \"scale-down-recreated-test-test-3\"))\n\tassert.Check(t, strings.Contains(res.Stdout(), \"scale-down-recreated-test-test-4\"))\n}\n"
  },
  {
    "path": "pkg/e2e/secrets_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestSecretFromEnv(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"env-secret\")\n\n\tt.Run(\"compose run\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/env-secret/compose.yaml\", \"run\", \"foo\"),\n\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"SECRET=BAR\")\n\t\t\t})\n\t\tres.Assert(t, icmd.Expected{Out: \"BAR\"})\n\t})\n\tt.Run(\"secret uid\", func(t *testing.T) {\n\t\tres := icmd.RunCmd(c.NewDockerComposeCmd(t, \"-f\", \"./fixtures/env-secret/compose.yaml\", \"run\", \"foo\", \"ls\", \"-al\", \"/var/run/secrets/bar\"),\n\t\t\tfunc(cmd *icmd.Cmd) {\n\t\t\t\tcmd.Env = append(cmd.Env, \"SECRET=BAR\")\n\t\t\t})\n\t\tres.Assert(t, icmd.Expected{Out: \"-r--r-----    1 1005     1005\"})\n\t})\n}\n\nfunc TestSecretFromInclude(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tdefer c.cleanupWithDown(t, \"env-secret-include\")\n\n\tt.Run(\"compose run\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/env-secret/compose.yaml\", \"run\", \"included\")\n\t\tres.Assert(t, icmd.Expected{Out: \"this-is-secret\"})\n\t})\n}\n"
  },
  {
    "path": "pkg/e2e/start_stop_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\ttestify \"github.com/stretchr/testify/assert\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestStartStop(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-start-stop-no-dependencies\"\n\n\tgetProjectRegx := func(status string) string {\n\t\t// match output with random spaces like:\n\t\t// e2e-start-stop      running(3)\n\t\treturn fmt.Sprintf(\"%s\\\\s+%s\\\\(%d\\\\)\", projectName, status, 2)\n\t}\n\n\tt.Run(\"Up a project\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/start-stop/compose.yaml\", \"--project-name\", projectName, \"up\",\n\t\t\t\"-d\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-no-dependencies-simple-1 Started\"), res.Combined())\n\n\t\tres = c.RunDockerComposeCmd(t, \"ls\", \"--all\")\n\t\ttestify.Regexp(t, getProjectRegx(\"running\"), res.Stdout())\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\")\n\t\tassertServiceStatus(t, projectName, \"simple\", \"Up\", res.Stdout())\n\t\tassertServiceStatus(t, projectName, \"another\", \"Up\", res.Stdout())\n\t})\n\n\tt.Run(\"stop project\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/start-stop/compose.yaml\", \"--project-name\", 
projectName, \"stop\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"ls\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e-start-stop-no-dependencies\"), res.Combined())\n\n\t\tres = c.RunDockerComposeCmd(t, \"ls\", \"--all\")\n\t\ttestify.Regexp(t, getProjectRegx(\"exited\"), res.Stdout())\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e-start-stop-no-dependencies-words-1\"), res.Combined())\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"--all\")\n\t\tassertServiceStatus(t, projectName, \"simple\", \"Exited\", res.Stdout())\n\t\tassertServiceStatus(t, projectName, \"another\", \"Exited\", res.Stdout())\n\t})\n\n\tt.Run(\"start project\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/start-stop/compose.yaml\", \"--project-name\", projectName, \"start\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"ls\")\n\t\ttestify.Regexp(t, getProjectRegx(\"running\"), res.Stdout())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n}\n\nfunc TestStartStopWithDependencies(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-start-stop-with-dependencies\"\n\n\tdefer c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"rm\", \"-fsv\")\n\n\tt.Run(\"Up\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/compose.yaml\", \"--project-name\", projectName,\n\t\t\t\"up\", \"-d\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-dependencies-foo-1 Started\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-dependencies-bar-1 Started\"), res.Combined())\n\t})\n\n\tt.Run(\"stop foo\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"stop\", 
\"foo\")\n\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-dependencies-foo-1 Stopped\"), res.Combined())\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"--status\", \"running\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-dependencies-bar-1\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e-start-stop-with-dependencies-foo-1\"), res.Combined())\n\t})\n\n\tt.Run(\"start foo\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"stop\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-dependencies-bar-1 Stopped\"), res.Combined())\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"start\", \"foo\")\n\t\tout := res.Combined()\n\t\tassert.Assert(t, strings.Contains(out, \"Container e2e-start-stop-with-dependencies-bar-1 Started\"), out)\n\t\tassert.Assert(t, strings.Contains(out, \"Container e2e-start-stop-with-dependencies-foo-1 Started\"), out)\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"--status\", \"running\")\n\t\tout = res.Combined()\n\t\tassert.Assert(t, strings.Contains(out, \"e2e-start-stop-with-dependencies-bar-1\"), out)\n\t\tassert.Assert(t, strings.Contains(out, \"e2e-start-stop-with-dependencies-foo-1\"), out)\n\t})\n\n\tt.Run(\"Up no-deps links\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/links/compose.yaml\", \"--project-name\", projectName, \"up\",\n\t\t\t\"--no-deps\", \"-d\", \"foo\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-dependencies-foo-1 Started\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"Container e2e-start-stop-with-dependencies-bar-1 Started\"), 
res.Combined())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\t_ = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n}\n\nfunc TestStartStopWithOneOffs(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-start-stop-with-oneoffs\"\n\n\tt.Run(\"Up\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/compose.yaml\", \"--project-name\", projectName,\n\t\t\t\"up\", \"-d\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-oneoffs-foo-1 Started\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-with-oneoffs-bar-1 Started\"), res.Combined())\n\t})\n\n\tt.Run(\"run one-off\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/compose.yaml\", \"--project-name\", projectName, \"run\", \"-d\", \"bar\", \"sleep\", \"infinity\")\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"-a\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-foo-1\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-1\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-run\"), res.Combined())\n\t})\n\n\tt.Run(\"stop (not one-off containers)\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"stop\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-foo-1\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-1\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e_start_stop_with_oneoffs-bar-run\"), res.Combined())\n\n\t\tres = c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"-a\", \"--status\", \"running\")\n\t\tassert.Assert(t, 
strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-run\"), res.Combined())\n\t})\n\n\tt.Run(\"start (not one-off containers)\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"start\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-foo-1\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-1\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-run\"), res.Combined())\n\t})\n\n\tt.Run(\"restart (not one-off containers)\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"restart\")\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-foo-1\"), res.Combined())\n\t\tassert.Assert(t, strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-1\"), res.Combined())\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar-run\"), res.Combined())\n\t})\n\n\tt.Run(\"down\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--remove-orphans\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"ps\", \"-a\", \"--status\", \"running\")\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), \"e2e-start-stop-with-oneoffs-bar\"), res.Combined())\n\t})\n}\n\nfunc TestStartAlreadyRunning(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-start-stop-svc-already-running\",\n\t\t\"COMPOSE_FILE=./fixtures/start-stop/compose.yaml\"))\n\tt.Cleanup(func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"--remove-orphans\", \"-v\", \"-t\", \"0\")\n\t})\n\n\tcli.RunDockerComposeCmd(t, \"up\", \"-d\", \"--wait\")\n\n\tres := cli.RunDockerComposeCmd(t, \"start\", \"simple\")\n\tassert.Equal(t, res.Stdout(), \"\", \"No output should have been written to 
stdout\")\n}\n\nfunc TestStopAlreadyStopped(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-start-stop-svc-already-stopped\",\n\t\t\"COMPOSE_FILE=./fixtures/start-stop/compose.yaml\"))\n\tt.Cleanup(func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"--remove-orphans\", \"-v\", \"-t\", \"0\")\n\t})\n\n\tcli.RunDockerComposeCmd(t, \"up\", \"-d\", \"--wait\")\n\n\t// stop the container\n\tcli.RunDockerComposeCmd(t, \"stop\", \"simple\")\n\n\t// attempt to stop it again\n\tres := cli.RunDockerComposeCmdNoCheck(t, \"stop\", \"simple\")\n\t// TODO: for consistency, this should NOT write any output because the\n\t// \t\tcontainer is already stopped\n\tres.Assert(t, icmd.Expected{\n\t\tExitCode: 0,\n\t\tErr:      \"Container e2e-start-stop-svc-already-stopped-simple-1 Stopped\",\n\t})\n}\n\nfunc TestStartStopMultipleServices(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-start-stop-svc-multiple\",\n\t\t\"COMPOSE_FILE=./fixtures/start-stop/compose.yaml\"))\n\tt.Cleanup(func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"--remove-orphans\", \"-v\", \"-t\", \"0\")\n\t})\n\n\tcli.RunDockerComposeCmd(t, \"up\", \"-d\", \"--wait\")\n\n\tres := cli.RunDockerComposeCmd(t, \"stop\", \"simple\", \"another\")\n\tservices := []string{\"simple\", \"another\"}\n\tfor _, svc := range services {\n\t\tstopMsg := fmt.Sprintf(\"Container e2e-start-stop-svc-multiple-%s-1 Stopped\", svc)\n\t\tassert.Assert(t, strings.Contains(res.Stderr(), stopMsg),\n\t\t\tfmt.Sprintf(\"Missing stop message for %s\\n%s\", svc, res.Combined()))\n\t}\n\n\tres = cli.RunDockerComposeCmd(t, \"start\", \"simple\", \"another\")\n\tfor _, svc := range services {\n\t\tstartMsg := fmt.Sprintf(\"Container e2e-start-stop-svc-multiple-%s-1 Started\", svc)\n\t\tassert.Assert(t, strings.Contains(res.Stderr(), startMsg),\n\t\t\tfmt.Sprintf(\"Missing start message for %s\\n%s\", svc, res.Combined()))\n\t}\n}\n\nfunc 
TestStartSingleServiceAndDependency(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=e2e-start-single-deps\",\n\t\t\"COMPOSE_FILE=./fixtures/start-stop/start-stop-deps.yaml\"))\n\tt.Cleanup(func() {\n\t\tcli.RunDockerComposeCmd(t, \"down\", \"--remove-orphans\", \"-v\", \"-t\", \"0\")\n\t})\n\n\tcli.RunDockerComposeCmd(t, \"create\", \"desired\")\n\n\tres := cli.RunDockerComposeCmd(t, \"start\", \"desired\")\n\tdesiredServices := []string{\"desired\", \"dep_1\", \"dep_2\"}\n\tfor _, s := range desiredServices {\n\t\tstartMsg := fmt.Sprintf(\"Container e2e-start-single-deps-%s-1 Started\", s)\n\t\tassert.Assert(t, strings.Contains(res.Combined(), startMsg),\n\t\t\tfmt.Sprintf(\"Missing start message for service: %s\\n%s\", s, res.Combined()))\n\t}\n\tundesiredServices := []string{\"another\", \"another_2\"}\n\tfor _, s := range undesiredServices {\n\t\tassert.Assert(t, !strings.Contains(res.Combined(), s),\n\t\t\tfmt.Sprintf(\"Shouldn't have message for service: %s\\n%s\", s, res.Combined()))\n\t}\n}\n\nfunc TestStartStopMultipleFiles(t *testing.T) {\n\tcli := NewParallelCLI(t, WithEnv(\"COMPOSE_PROJECT_NAME=e2e-start-stop-svc-multiple-files\"))\n\tt.Cleanup(func() {\n\t\tcli.RunDockerComposeCmd(t, \"-p\", \"e2e-start-stop-svc-multiple-files\", \"down\", \"--remove-orphans\")\n\t})\n\n\tcli.RunDockerComposeCmd(t, \"-f\", \"./fixtures/start-stop/compose.yaml\", \"up\", \"-d\")\n\tcli.RunDockerComposeCmd(t, \"-f\", \"./fixtures/start-stop/other.yaml\", \"up\", \"-d\")\n\n\tres := cli.RunDockerComposeCmd(t, \"-f\", \"./fixtures/start-stop/compose.yaml\", \"stop\")\n\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-svc-multiple-files-simple-1 Stopped\"), res.Combined())\n\tassert.Assert(t, strings.Contains(res.Combined(), \"Container e2e-start-stop-svc-multiple-files-another-1 Stopped\"), res.Combined())\n\tassert.Assert(t, !strings.Contains(res.Combined(), \"Container 
e2e-start-stop-svc-multiple-files-a-different-one-1 Stopped\"), res.Combined())\n\tassert.Assert(t, !strings.Contains(res.Combined(), \"Container e2e-start-stop-svc-multiple-files-and-another-one-1 Stopped\"), res.Combined())\n}\n"
  },
  {
    "path": "pkg/e2e/up_test.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strings\"\n\t\"syscall\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nfunc TestUpServiceUnhealthy(t *testing.T) {\n\tc := NewParallelCLI(t)\n\tconst projectName = \"e2e-start-fail\"\n\n\tres := c.RunDockerComposeCmdNoCheck(t, \"-f\", \"fixtures/start-fail/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: `container e2e-start-fail-fail-1 is unhealthy`})\n\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n}\n\nfunc TestUpDependenciesNotStopped(t *testing.T) {\n\tc := NewParallelCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=up-deps-stop\",\n\t))\n\n\treset := func() {\n\t\tc.RunDockerComposeCmdNoCheck(t, \"down\", \"-t=0\", \"--remove-orphans\", \"-v\")\n\t}\n\treset()\n\tt.Cleanup(reset)\n\n\tt.Log(\"Launching orphan container (background)\")\n\tc.RunDockerComposeCmd(t,\n\t\t\"-f=./fixtures/ups-deps-stop/orphan.yaml\",\n\t\t\"up\",\n\t\t\"--wait\",\n\t\t\"--detach\",\n\t\t\"orphan\",\n\t)\n\tRequireServiceState(t, c, \"orphan\", \"running\")\n\n\tt.Log(\"Launching app container with implicit dependency\")\n\tupOut 
:= &utils.SafeBuffer{}\n\ttestCmd := c.NewDockerComposeCmd(t,\n\t\t\"-f=./fixtures/ups-deps-stop/compose.yaml\",\n\t\t\"up\",\n\t\t\"--menu=false\",\n\t\t\"app\",\n\t)\n\n\tctx, cancel := context.WithTimeout(t.Context(), 15*time.Second)\n\tt.Cleanup(cancel)\n\n\tcmd, err := StartWithNewGroupID(ctx, testCmd, upOut, nil)\n\tassert.NilError(t, err, \"Failed to run compose up\")\n\n\tt.Log(\"Waiting for containers to be in running state\")\n\tupOut.RequireEventuallyContains(t, \"hello app\")\n\tRequireServiceState(t, c, \"app\", \"running\")\n\tRequireServiceState(t, c, \"dependency\", \"running\")\n\n\tt.Log(\"Simulating Ctrl-C\")\n\trequire.NoError(t, syscall.Kill(-cmd.Process.Pid, syscall.SIGINT),\n\t\t\"Failed to send SIGINT to compose up process\")\n\n\tt.Log(\"Waiting for `compose up` to exit\")\n\terr = cmd.Wait()\n\tif err != nil {\n\t\tvar exitErr *exec.ExitError\n\t\trequire.True(t, errors.As(err, &exitErr), \"unexpected error from compose up: %v\", err)\n\t\tif exitErr.ExitCode() == -1 {\n\t\t\tt.Fatalf(\"`compose up` was killed: %v\", err)\n\t\t}\n\t\trequire.Equal(t, 130, exitErr.ExitCode())\n\t}\n\n\tRequireServiceState(t, c, \"app\", \"exited\")\n\t// dependency should still be running\n\tRequireServiceState(t, c, \"dependency\", \"running\")\n\tRequireServiceState(t, c, \"orphan\", \"running\")\n}\n\nfunc TestUpWithBuildDependencies(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"up with service using image built by another service\", func(t *testing.T) {\n\t\t// ensure local test run does not reuse a previously built image\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"built-image-dependency\")\n\n\t\tres := c.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/dependencies\",\n\t\t\t\"-f\", \"fixtures/dependencies/service-image-depends-on.yaml\", \"up\", \"-d\")\n\n\t\tt.Cleanup(func() {\n\t\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/dependencies\",\n\t\t\t\t\"-f\", \"fixtures/dependencies/service-image-depends-on.yaml\", \"down\", \"--rmi\", \"all\")\n\t\t})\n\n\t\tres.Assert(t, 
icmd.Success)\n\t})\n}\n\nfunc TestUpWithDependencyExit(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tt.Run(\"up with dependency to exit before being healthy\", func(t *testing.T) {\n\t\tres := c.RunDockerComposeCmdNoCheck(t, \"--project-directory\", \"fixtures/dependencies\",\n\t\t\t\"-f\", \"fixtures/dependencies/dependency-exit.yaml\", \"up\", \"-d\")\n\n\t\tt.Cleanup(func() {\n\t\t\tc.RunDockerComposeCmd(t, \"--project-name\", \"dependencies\", \"down\")\n\t\t})\n\n\t\tres.Assert(t, icmd.Expected{ExitCode: 1, Err: \"dependency failed to start: container dependencies-db-1 exited (1)\"})\n\t})\n}\n\nfunc TestScaleDoesntRecreate(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-scale\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"fixtures/simple-composefile/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"fixtures/simple-composefile/compose.yaml\", \"--project-name\", projectName, \"up\", \"--scale\", \"simple=2\", \"-d\")\n\tassert.Check(t, !strings.Contains(res.Combined(), \"Recreated\"))\n}\n\nfunc TestUpWithDependencyNotRequired(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-dependency-not-required\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/dependencies/deps-not-required.yaml\", \"--project-name\", projectName,\n\t\t\"--profile\", \"not-required\", \"up\", \"-d\")\n\tassert.Assert(t, strings.Contains(res.Combined(), \"foo\"), res.Combined())\n\tassert.Assert(t, strings.Contains(res.Combined(), \" optional dependency \\\"bar\\\" failed to start\"), res.Combined())\n}\n\nfunc TestUpWithAllResources(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-all-resources\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, 
\"--project-name\", projectName, \"down\", \"-v\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/resources/compose.yaml\", \"--all-resources\", \"--project-name\", projectName, \"up\")\n\tassert.Assert(t, strings.Contains(res.Combined(), fmt.Sprintf(`Volume %s_my_vol Created`, projectName)), res.Combined())\n\tassert.Assert(t, strings.Contains(res.Combined(), fmt.Sprintf(`Network %s_my_net Created`, projectName)), res.Combined())\n}\n\nfunc TestUpProfile(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-up-profile\"\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"--profile\", \"test\", \"down\", \"-v\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/profiles/docker-compose.yaml\", \"--project-name\", projectName, \"up\", \"foo\")\n\tassert.Assert(t, strings.Contains(res.Combined(), `Container db_c Created`), res.Combined())\n\tassert.Assert(t, strings.Contains(res.Combined(), `Container foo_c Created`), res.Combined())\n\tassert.Assert(t, !strings.Contains(res.Combined(), `Container bar_c Created`), res.Combined())\n}\n\nfunc TestUpImageID(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-up-image-id\"\n\n\tdigest := strings.TrimSpace(c.RunDockerCmd(t, \"image\", \"inspect\", \"alpine\", \"-f\", \"{{ .ID }}\").Stdout())\n\t_, id, _ := strings.Cut(digest, \":\")\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"-v\")\n\t})\n\n\tc = NewCLI(t, WithEnv(fmt.Sprintf(\"ID=%s\", id)))\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/simple-composefile/id.yaml\", \"--project-name\", projectName, \"up\")\n}\n\nfunc TestUpStopWithLogsMixed(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-stop-logs\"\n\n\tt.Cleanup(func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"-v\")\n\t})\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/stop/compose.yaml\", 
\"--project-name\", projectName, \"up\", \"--abort-on-container-exit\")\n\t// assert we still get service2 logs after service 1 Stopped event\n\tres.Assert(t, icmd.Expected{\n\t\tErr: \"Container compose-e2e-stop-logs-service1-1 Stopped\",\n\t})\n\t// assert we get stop hook logs\n\tres.Assert(t, icmd.Expected{Out: \"service2-1 ->  | stop hook running...\\nservice2-1     | 64 bytes\"})\n}\n"
  },
  {
    "path": "pkg/e2e/volumes_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestLocalComposeVolume(t *testing.T) {\n\tc := NewParallelCLI(t)\n\n\tconst projectName = \"compose-e2e-volume\"\n\n\tt.Run(\"up with build and no image name, volume\", func(t *testing.T) {\n\t\t// ensure local test run does not reuse previously build image\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"compose-e2e-volume-nginx\")\n\t\tc.RunDockerOrExitError(t, \"volume\", \"rm\", projectName+\"-staticVol\")\n\t\tc.RunDockerOrExitError(t, \"volume\", \"rm\", \"myvolume\")\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/volume-test\", \"--project-name\", projectName, \"up\",\n\t\t\t\"-d\")\n\t})\n\n\tt.Run(\"access bind mount data\", func(t *testing.T) {\n\t\toutput := HTTPGetWithRetry(t, \"http://localhost:8090\", http.StatusOK, 2*time.Second, 20*time.Second)\n\t\tassert.Assert(t, strings.Contains(output, \"Hello from Nginx container\"))\n\t})\n\n\tt.Run(\"check container volume specs\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"inspect\", \"compose-e2e-volume-nginx2-1\", \"--format\", \"{{ json .Mounts }}\")\n\t\toutput := res.Stdout()\n\t\tassert.Assert(t, strings.Contains(output, 
`\"Destination\":\"/usr/src/app/node_modules\",\"Driver\":\"local\",\"Mode\":\"z\",\"RW\":true,\"Propagation\":\"\"`), output)\n\t\tassert.Assert(t, strings.Contains(output, `\"Destination\":\"/myconfig\",\"Mode\":\"\",\"RW\":false,\"Propagation\":\"rprivate\"`), output)\n\t})\n\n\tt.Run(\"check config content\", func(t *testing.T) {\n\t\toutput := c.RunDockerCmd(t, \"exec\", \"compose-e2e-volume-nginx2-1\", \"cat\", \"/myconfig\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `Hello from Nginx container`), output)\n\t})\n\n\tt.Run(\"check secrets content\", func(t *testing.T) {\n\t\toutput := c.RunDockerCmd(t, \"exec\", \"compose-e2e-volume-nginx2-1\", \"cat\", \"/run/secrets/mysecret\").Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `Hello from Nginx container`), output)\n\t})\n\n\tt.Run(\"check container bind-mounts specs\", func(t *testing.T) {\n\t\tres := c.RunDockerCmd(t, \"inspect\", \"compose-e2e-volume-nginx-1\", \"--format\", \"{{ json .Mounts }}\")\n\t\toutput := res.Stdout()\n\t\tassert.Assert(t, strings.Contains(output, `\"Type\":\"bind\"`))\n\t\tassert.Assert(t, strings.Contains(output, `\"Destination\":\"/usr/share/nginx/html\"`))\n\t})\n\n\tt.Run(\"should inherit anonymous volumes\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"exec\", \"compose-e2e-volume-nginx2-1\", \"touch\", \"/usr/src/app/node_modules/test\")\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/volume-test\", \"--project-name\", projectName, \"up\", \"--force-recreate\", \"-d\")\n\t\tc.RunDockerOrExitError(t, \"exec\", \"compose-e2e-volume-nginx2-1\", \"ls\", \"/usr/src/app/node_modules/test\")\n\t})\n\n\tt.Run(\"should renew anonymous volumes\", func(t *testing.T) {\n\t\tc.RunDockerOrExitError(t, \"exec\", \"compose-e2e-volume-nginx2-1\", \"touch\", \"/usr/src/app/node_modules/test\")\n\t\tc.RunDockerComposeCmd(t, \"--project-directory\", \"fixtures/volume-test\", \"--project-name\", projectName, \"up\", \"--force-recreate\", 
\"--renew-anon-volumes\", \"-d\")\n\t\tc.RunDockerOrExitError(t, \"exec\", \"compose-e2e-volume-nginx2-1\", \"ls\", \"/usr/src/app/node_modules/test\")\n\t})\n\n\tt.Run(\"cleanup volume project\", func(t *testing.T) {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--volumes\")\n\t\tls := c.RunDockerCmd(t, \"volume\", \"ls\").Stdout()\n\t\tassert.Assert(t, !strings.Contains(ls, projectName+\"-staticVol\"))\n\t\tassert.Assert(t, !strings.Contains(ls, \"myvolume\"))\n\t})\n}\n\nfunc TestProjectVolumeBind(t *testing.T) {\n\tif composeStandaloneMode {\n\t\tt.Skip()\n\t}\n\tc := NewParallelCLI(t)\n\tconst projectName = \"compose-e2e-project-volume-bind\"\n\n\tt.Run(\"up on project volume with bind specification\", func(t *testing.T) {\n\t\tif runtime.GOOS == \"windows\" {\n\t\t\tt.Skip(\"Running on Windows. Skipping...\")\n\t\t}\n\t\ttmpDir := t.TempDir()\n\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\n\t\tc.RunDockerOrExitError(t, \"volume\", \"rm\", \"-f\", projectName+\"_project-data\").Assert(t, icmd.Success)\n\t\tcmd := c.NewCmdWithEnv([]string{\"TEST_DIR=\" + tmpDir},\n\t\t\t\"docker\", \"compose\", \"--project-directory\", \"fixtures/project-volume-bind-test\", \"--project-name\", projectName, \"up\", \"-d\")\n\t\ticmd.RunCmd(cmd).Assert(t, icmd.Success)\n\t\tdefer c.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\")\n\n\t\tc.RunCmd(t, \"sh\", \"-c\", \"echo SUCCESS > \"+filepath.Join(tmpDir, \"resultfile\")).Assert(t, icmd.Success)\n\n\t\tret := c.RunDockerOrExitError(t, \"exec\", \"frontend\", \"bash\", \"-c\", \"cat /data/resultfile\").Assert(t, icmd.Success)\n\t\tassert.Assert(t, strings.Contains(ret.Stdout(), \"SUCCESS\"))\n\t})\n}\n\nfunc TestUpSwitchVolumes(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-switch-volumes\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t\tc.RunDockerCmd(t, \"volume\", \"rm\", \"-f\", 
\"test_external_volume\")\n\t\tc.RunDockerCmd(t, \"volume\", \"rm\", \"-f\", \"test_external_volume_2\")\n\t})\n\n\tc.RunDockerCmd(t, \"volume\", \"create\", \"test_external_volume\")\n\tc.RunDockerCmd(t, \"volume\", \"create\", \"test_external_volume_2\")\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/switch-volumes/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tres := c.RunDockerCmd(t, \"inspect\", fmt.Sprintf(\"%s-app-1\", projectName), \"-f\", \"{{ (index .Mounts 0).Name }}\")\n\tres.Assert(t, icmd.Expected{Out: \"test_external_volume\"})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/switch-volumes/compose2.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tres = c.RunDockerCmd(t, \"inspect\", fmt.Sprintf(\"%s-app-1\", projectName), \"-f\", \"{{ (index .Mounts 0).Name }}\")\n\tres.Assert(t, icmd.Expected{Out: \"test_external_volume_2\"})\n}\n\nfunc TestUpRecreateVolumes(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-recreate-volumes\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/recreate-volumes/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tres := c.RunDockerCmd(t, \"volume\", \"inspect\", fmt.Sprintf(\"%s_my_vol\", projectName), \"-f\", \"{{ index .Labels \\\"foo\\\" }}\")\n\tres.Assert(t, icmd.Expected{Out: \"bar\"})\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/recreate-volumes/compose2.yaml\", \"--project-name\", projectName, \"up\", \"-d\", \"-y\")\n\tres = c.RunDockerCmd(t, \"volume\", \"inspect\", fmt.Sprintf(\"%s_my_vol\", projectName), \"-f\", \"{{ index .Labels \\\"foo\\\" }}\")\n\tres.Assert(t, icmd.Expected{Out: \"zot\"})\n}\n\nfunc TestUpRecreateVolumes_IgnoreBinds(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-recreate-volumes\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\n\tc.RunDockerComposeCmd(t, \"-f\", 
\"./fixtures/recreate-volumes/bind.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/recreate-volumes/bind.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tassert.Check(t, !strings.Contains(res.Combined(), \"Recreated\"))\n}\n\nfunc TestImageVolume(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-image-volume\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t})\n\n\tversion := c.RunDockerCmd(t, \"version\", \"-f\", \"{{.Server.Version}}\")\n\tmajor, _, found := strings.Cut(version.Combined(), \".\")\n\tassert.Assert(t, found)\n\tif major == \"26\" || major == \"27\" {\n\t\tt.Skip(\"Skipping test due to docker version < 28\")\n\t}\n\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/volumes/compose.yaml\", \"--project-name\", projectName, \"up\", \"with_image\")\n\tout := res.Combined()\n\tassert.Check(t, strings.Contains(out, \"index.html\"))\n}\n\nfunc TestImageVolumeRecreateOnRebuild(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"compose-e2e-image-volume-recreate\"\n\tt.Cleanup(func() {\n\t\tc.cleanupWithDown(t, projectName)\n\t\tc.RunDockerOrExitError(t, \"rmi\", \"-f\", \"image-volume-source\")\n\t})\n\n\tversion := c.RunDockerCmd(t, \"version\", \"-f\", \"{{.Server.Version}}\")\n\tmajor, _, found := strings.Cut(version.Combined(), \".\")\n\tassert.Assert(t, found)\n\tif major == \"26\" || major == \"27\" {\n\t\tt.Skip(\"Skipping test due to docker version < 28\")\n\t}\n\n\t// First build and run with initial content\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/image-volume-recreate/compose.yaml\",\n\t\t\"--project-name\", projectName, \"build\", \"--build-arg\", \"CONTENT=foo\")\n\tres := c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/image-volume-recreate/compose.yaml\",\n\t\t\"--project-name\", projectName, \"up\", \"-d\")\n\tassert.Check(t, !strings.Contains(res.Combined(), \"error\"))\n\n\t// Check initial content\n\tres = 
c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/image-volume-recreate/compose.yaml\",\n\t\t\"--project-name\", projectName, \"logs\", \"consumer\")\n\tassert.Check(t, strings.Contains(res.Combined(), \"foo\"), \"Expected 'foo' in output, got: %s\", res.Combined())\n\n\t// Rebuild source image with different content\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/image-volume-recreate/compose.yaml\",\n\t\t\"--project-name\", projectName, \"build\", \"--build-arg\", \"CONTENT=bar\")\n\n\t// Run up again - consumer should be recreated because source image changed\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/image-volume-recreate/compose.yaml\",\n\t\t\"--project-name\", projectName, \"up\", \"-d\")\n\t// The consumer container should be recreated\n\tassert.Check(t, strings.Contains(res.Combined(), \"Recreate\") || strings.Contains(res.Combined(), \"Created\"),\n\t\t\"Expected container to be recreated, got: %s\", res.Combined())\n\n\t// Check updated content\n\tres = c.RunDockerComposeCmd(t, \"-f\", \"./fixtures/image-volume-recreate/compose.yaml\",\n\t\t\"--project-name\", projectName, \"logs\", \"consumer\")\n\tassert.Check(t, strings.Contains(res.Combined(), \"bar\"), \"Expected 'bar' in output after rebuild, got: %s\", res.Combined())\n}\n"
  },
  {
    "path": "pkg/e2e/wait_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/icmd\"\n)\n\nfunc TestWaitOnFaster(t *testing.T) {\n\tconst projectName = \"e2e-wait-faster\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/wait/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"wait\", \"faster\")\n}\n\nfunc TestWaitOnSlower(t *testing.T) {\n\tconst projectName = \"e2e-wait-slower\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/wait/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"wait\", \"slower\")\n}\n\nfunc TestWaitOnInfinity(t *testing.T) {\n\tconst projectName = \"e2e-wait-infinity\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", 
\"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/wait/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\n\tcmd := c.NewDockerComposeCmd(t, \"--project-name\", projectName, \"wait\", \"infinity\")\n\tr := icmd.StartCmd(cmd)\n\tassert.NilError(t, r.Error)\n\tt.Cleanup(func() {\n\t\tif r.Cmd.Process != nil {\n\t\t\t_ = r.Cmd.Process.Kill()\n\t\t}\n\t})\n\n\tfinished := make(chan struct{})\n\tticker := time.NewTicker(7 * time.Second)\n\tgo func() {\n\t\t_ = r.Cmd.Wait()\n\t\tfinished <- struct{}{}\n\t}()\n\n\tselect {\n\tcase <-finished:\n\t\tt.Fatal(\"wait infinity should not finish\")\n\tcase <-ticker.C:\n\t}\n}\n\nfunc TestWaitAndDrop(t *testing.T) {\n\tconst projectName = \"e2e-wait-and-drop\"\n\tc := NewParallelCLI(t)\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"down\", \"--timeout=0\", \"--remove-orphans\")\n\t}\n\tt.Cleanup(cleanup)\n\tcleanup()\n\n\tc.RunDockerComposeCmd(t, \"-f\", \"./fixtures/wait/compose.yaml\", \"--project-name\", projectName, \"up\", \"-d\")\n\tc.RunDockerComposeCmd(t, \"--project-name\", projectName, \"wait\", \"--down-project\", \"faster\")\n\n\tres := c.RunDockerCmd(t, \"ps\", \"--all\")\n\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n}\n"
  },
  {
    "path": "pkg/e2e/watch_test.go",
    "content": "/*\n   Copyright 2023 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage e2e\n\nimport (\n\t\"bytes\"\n\t\"crypto/rand\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"sync/atomic\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"gotest.tools/v3/assert\"\n\t\"gotest.tools/v3/assert/cmp\"\n\t\"gotest.tools/v3/icmd\"\n\t\"gotest.tools/v3/poll\"\n)\n\nfunc TestWatch(t *testing.T) {\n\tservices := []string{\"alpine\", \"busybox\", \"debian\"}\n\tfor _, svcName := range services {\n\t\tt.Run(svcName, func(t *testing.T) {\n\t\t\tt.Helper()\n\t\t\tdoTest(t, svcName)\n\t\t})\n\t}\n}\n\nfunc TestRebuildOnDotEnvWithExternalNetwork(t *testing.T) {\n\tconst projectName = \"test_rebuild_on_dotenv_with_external_network\"\n\tconst svcName = \"ext-alpine\"\n\tcontainerName := strings.Join([]string{projectName, svcName, \"1\"}, \"-\")\n\tconst networkName = \"e2e-watch-external_network_test\"\n\tconst dotEnvFilepath = \"./fixtures/watch/.env\"\n\n\tc := NewCLI(t, WithEnv(\n\t\t\"COMPOSE_PROJECT_NAME=\"+projectName,\n\t\t\"COMPOSE_FILE=./fixtures/watch/with-external-network.yaml\",\n\t))\n\n\tcleanup := func() {\n\t\tc.RunDockerComposeCmdNoCheck(t, \"down\", \"--remove-orphans\", \"--volumes\", \"--rmi=local\")\n\t\tc.RunDockerOrExitError(t, \"network\", \"rm\", networkName)\n\t\tos.Remove(dotEnvFilepath) //nolint:errcheck\n\t}\n\tcleanup()\n\n\tt.Log(\"create network that is 
referenced by the container we're testing\")\n\tc.RunDockerCmd(t, \"network\", \"create\", networkName)\n\tres := c.RunDockerCmd(t, \"network\", \"ls\")\n\tassert.Assert(t, !strings.Contains(res.Combined(), projectName), res.Combined())\n\n\tt.Log(\"create a dotenv file that will be used to trigger the rebuild\")\n\terr := os.WriteFile(dotEnvFilepath, []byte(\"HELLO=WORLD\"), 0o666)\n\tassert.NilError(t, err)\n\t_, err = os.ReadFile(dotEnvFilepath)\n\tassert.NilError(t, err)\n\n\t// TODO: refactor this duplicated code into frameworks? Maybe?\n\tt.Log(\"starting docker compose watch\")\n\tcmd := c.NewDockerComposeCmd(t, \"--verbose\", \"watch\", svcName)\n\t// stream output since watch runs in the background\n\tcmd.Stdout = os.Stdout\n\tcmd.Stderr = os.Stderr\n\tr := icmd.StartCmd(cmd)\n\trequire.NoError(t, r.Error)\n\tvar testComplete atomic.Bool\n\tgo func() {\n\t\t// if the process exits abnormally before the test is done, fail the test\n\t\tif err := r.Cmd.Wait(); err != nil && !t.Failed() && !testComplete.Load() {\n\t\t\tassert.Check(t, cmp.Nil(err))\n\t\t}\n\t}()\n\n\tt.Log(\"wait for watch to start watching\")\n\tc.WaitForCondition(t, func() (bool, string) {\n\t\tout := r.String()\n\t\treturn strings.Contains(out, \"Watch enabled\"), \"watch not started\"\n\t}, 30*time.Second, 1*time.Second)\n\n\tpn := c.RunDockerCmd(t, \"inspect\", containerName, \"-f\", \"{{ .HostConfig.NetworkMode }}\")\n\tassert.Equal(t, strings.TrimSpace(pn.Stdout()), networkName)\n\n\tt.Log(\"update the dotenv file to trigger the rebuild\")\n\terr = os.WriteFile(dotEnvFilepath, []byte(\"HELLO=WORLD\\nTEST=REBUILD\"), 0o666)\n\tassert.NilError(t, err)\n\t_, err = os.ReadFile(dotEnvFilepath)\n\tassert.NilError(t, err)\n\n\t// NOTE: are there any other ways to check if the container has been rebuilt?\n\tt.Log(\"check if the container has been rebuilt\")\n\tc.WaitForCondition(t, func() (bool, string) {\n\t\tout := r.String()\n\t\tif strings.Count(out, \"batch complete\") != 
1 {\n\t\t\treturn false, fmt.Sprintf(\"container %s was not rebuilt\", containerName)\n\t\t}\n\t\treturn true, fmt.Sprintf(\"container %s was rebuilt\", containerName)\n\t}, 30*time.Second, 1*time.Second)\n\n\tpn2 := c.RunDockerCmd(t, \"inspect\", containerName, \"-f\", \"{{ .HostConfig.NetworkMode }}\")\n\tassert.Equal(t, strings.TrimSpace(pn2.Stdout()), networkName)\n\n\tassert.Check(t, !strings.Contains(r.Combined(), \"Application failed to start after update\"))\n\n\tt.Cleanup(cleanup)\n\tt.Cleanup(func() {\n\t\t// IMPORTANT: watch doesn't exit on its own, don't leak processes!\n\t\tif r.Cmd.Process != nil {\n\t\t\tt.Logf(\"Killing watch process: pid[%d]\", r.Cmd.Process.Pid)\n\t\t\t_ = r.Cmd.Process.Kill()\n\t\t}\n\t})\n\ttestComplete.Store(true)\n}\n\n// NOTE: these tests all share a single Compose file but are safe to run\n// concurrently (though that's not recommended).\nfunc doTest(t *testing.T, svcName string) {\n\ttmpdir := t.TempDir()\n\tdataDir := filepath.Join(tmpdir, \"data\")\n\tconfigDir := filepath.Join(tmpdir, \"config\")\n\n\twriteTestFile := func(name, contents, sourceDir string) {\n\t\tt.Helper()\n\t\tdest := filepath.Join(sourceDir, name)\n\t\trequire.NoError(t, os.MkdirAll(filepath.Dir(dest), 0o700))\n\t\tt.Logf(\"writing %q to %q\", contents, dest)\n\t\trequire.NoError(t, os.WriteFile(dest, []byte(contents+\"\\n\"), 0o600))\n\t}\n\twriteDataFile := func(name, contents string) {\n\t\twriteTestFile(name, contents, dataDir)\n\t}\n\n\tcomposeFilePath := filepath.Join(tmpdir, \"compose.yaml\")\n\tCopyFile(t, filepath.Join(\"fixtures\", \"watch\", \"compose.yaml\"), composeFilePath)\n\n\tprojName := \"e2e-watch-\" + svcName\n\tenv := []string{\n\t\t\"COMPOSE_FILE=\" + composeFilePath,\n\t\t\"COMPOSE_PROJECT_NAME=\" + projName,\n\t}\n\n\tcli := NewCLI(t, WithEnv(env...))\n\n\t// important that --rmi is used to prune the images and ensure that watch builds on launch\n\tdefer cli.cleanupWithDown(t, projName, \"--rmi=local\")\n\n\tcmd := 
cli.NewDockerComposeCmd(t, \"--verbose\", \"watch\", svcName)\n\t// stream output since watch runs in the background\n\tcmd.Stdout = os.Stdout\n\tcmd.Stderr = os.Stderr\n\tr := icmd.StartCmd(cmd)\n\trequire.NoError(t, r.Error)\n\tt.Cleanup(func() {\n\t\t// IMPORTANT: watch doesn't exit on its own, don't leak processes!\n\t\tif r.Cmd.Process != nil {\n\t\t\tt.Logf(\"Killing watch process: pid[%d]\", r.Cmd.Process.Pid)\n\t\t\t_ = r.Cmd.Process.Kill()\n\t\t}\n\t})\n\tvar testComplete atomic.Bool\n\tgo func() {\n\t\t// if the process exits abnormally before the test is done, fail the test\n\t\tif err := r.Cmd.Wait(); err != nil && !t.Failed() && !testComplete.Load() {\n\t\t\tassert.Check(t, cmp.Nil(err))\n\t\t}\n\t}()\n\n\trequire.NoError(t, os.Mkdir(dataDir, 0o700))\n\n\tcheckFileContents := func(path string, contents string) poll.Check {\n\t\treturn func(pollLog poll.LogT) poll.Result {\n\t\t\tif r.Cmd.ProcessState != nil {\n\t\t\t\treturn poll.Error(fmt.Errorf(\"watch process exited early: %s\", r.Cmd.ProcessState))\n\t\t\t}\n\t\t\tres := icmd.RunCmd(cli.NewDockerComposeCmd(t, \"exec\", svcName, \"cat\", path))\n\t\t\tif strings.Contains(res.Stdout(), contents) {\n\t\t\t\treturn poll.Success()\n\t\t\t}\n\t\t\treturn poll.Continue(\"%v\", res.Combined())\n\t\t}\n\t}\n\n\twaitForFlush := func() {\n\t\tb := make([]byte, 32)\n\t\t_, _ = rand.Read(b)\n\t\tsentinelVal := fmt.Sprintf(\"%x\", b)\n\t\twriteDataFile(\"wait.txt\", sentinelVal)\n\t\tpoll.WaitOn(t, checkFileContents(\"/app/data/wait.txt\", sentinelVal))\n\t}\n\n\tt.Logf(\"Writing to a file until Compose watch is up and running\")\n\tpoll.WaitOn(t, func(t poll.LogT) poll.Result {\n\t\twriteDataFile(\"hello.txt\", \"hello world\")\n\t\treturn checkFileContents(\"/app/data/hello.txt\", \"hello world\")(t)\n\t}, poll.WithDelay(time.Second))\n\n\tt.Logf(\"Modifying file contents\")\n\twriteDataFile(\"hello.txt\", \"hello watch\")\n\tpoll.WaitOn(t, checkFileContents(\"/app/data/hello.txt\", \"hello 
watch\"))\n\n\tt.Logf(\"Deleting file\")\n\trequire.NoError(t, os.Remove(filepath.Join(dataDir, \"hello.txt\")))\n\twaitForFlush()\n\tcli.RunDockerComposeCmdNoCheck(t, \"exec\", svcName, \"stat\", \"/app/data/hello.txt\").\n\t\tAssert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"No such file or directory\",\n\t\t})\n\n\tt.Logf(\"Writing to ignored paths\")\n\twriteDataFile(\"data.foo\", \"ignored\")\n\twriteDataFile(filepath.Join(\"ignored\", \"hello.txt\"), \"ignored\")\n\twaitForFlush()\n\tcli.RunDockerComposeCmdNoCheck(t, \"exec\", svcName, \"stat\", \"/app/data/data.foo\").\n\t\tAssert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"No such file or directory\",\n\t\t})\n\tcli.RunDockerComposeCmdNoCheck(t, \"exec\", svcName, \"stat\", \"/app/data/ignored\").\n\t\tAssert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"No such file or directory\",\n\t\t})\n\n\tt.Logf(\"Creating subdirectory\")\n\trequire.NoError(t, os.Mkdir(filepath.Join(dataDir, \"subdir\"), 0o700))\n\twaitForFlush()\n\tcli.RunDockerComposeCmd(t, \"exec\", svcName, \"stat\", \"/app/data/subdir\")\n\n\tt.Logf(\"Writing to file in subdirectory\")\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"a\")\n\tpoll.WaitOn(t, checkFileContents(\"/app/data/subdir/file.txt\", \"a\"))\n\n\tt.Logf(\"Writing to file multiple times\")\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"x\")\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"y\")\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"z\")\n\tpoll.WaitOn(t, checkFileContents(\"/app/data/subdir/file.txt\", \"z\"))\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"z\")\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"y\")\n\twriteDataFile(filepath.Join(\"subdir\", \"file.txt\"), \"x\")\n\tpoll.WaitOn(t, checkFileContents(\"/app/data/subdir/file.txt\", \"x\"))\n\n\tt.Logf(\"Deleting directory\")\n\trequire.NoError(t, os.RemoveAll(filepath.Join(dataDir, 
\"subdir\")))\n\twaitForFlush()\n\tcli.RunDockerComposeCmdNoCheck(t, \"exec\", svcName, \"stat\", \"/app/data/subdir\").\n\t\tAssert(t, icmd.Expected{\n\t\t\tExitCode: 1,\n\t\t\tErr:      \"No such file or directory\",\n\t\t})\n\n\tt.Logf(\"Sync and restart use case\")\n\trequire.NoError(t, os.Mkdir(configDir, 0o700))\n\twriteTestFile(\"file.config\", \"This is an updated config file\", configDir)\n\tcheckRestart := func(state string) poll.Check {\n\t\treturn func(pollLog poll.LogT) poll.Result {\n\t\t\tif strings.Contains(r.Combined(), state) {\n\t\t\t\treturn poll.Success()\n\t\t\t}\n\t\t\treturn poll.Continue(\"%v\", r.Combined())\n\t\t}\n\t}\n\tpoll.WaitOn(t, checkRestart(fmt.Sprintf(\"service(s) [%q] restarted\", svcName)))\n\tpoll.WaitOn(t, checkFileContents(\"/app/config/file.config\", \"This is an updated config file\"))\n\n\ttestComplete.Store(true)\n}\n\nfunc TestWatchExec(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"test_watch_exec\"\n\n\tdefer c.cleanupWithDown(t, projectName)\n\n\ttmpdir := t.TempDir()\n\tcomposeFilePath := filepath.Join(tmpdir, \"compose.yaml\")\n\tCopyFile(t, filepath.Join(\"fixtures\", \"watch\", \"exec.yaml\"), composeFilePath)\n\tcmd := c.NewDockerComposeCmd(t, \"-p\", projectName, \"-f\", composeFilePath, \"up\", \"--watch\")\n\tbuffer := bytes.NewBuffer(nil)\n\tcmd.Stdout = buffer\n\twatch := icmd.StartCmd(cmd)\n\n\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\tout := buffer.String()\n\t\tif strings.Contains(out, \"64 bytes from\") {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"%v\", watch.Stdout())\n\t})\n\n\tt.Logf(\"Create new file\")\n\n\ttestFile := filepath.Join(tmpdir, \"test\")\n\trequire.NoError(t, os.WriteFile(testFile, []byte(\"test\\n\"), 0o600))\n\n\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\tout := buffer.String()\n\t\tif strings.Contains(out, \"SUCCESS\") {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"%v\", 
out)\n\t})\n\tc.RunDockerComposeCmdNoCheck(t, \"-p\", projectName, \"kill\", \"-s\", \"9\")\n}\n\nfunc TestWatchMultiServices(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"test_watch_rebuild\"\n\n\tdefer c.cleanupWithDown(t, projectName)\n\n\ttmpdir := t.TempDir()\n\tcomposeFilePath := filepath.Join(tmpdir, \"compose.yaml\")\n\tCopyFile(t, filepath.Join(\"fixtures\", \"watch\", \"rebuild.yaml\"), composeFilePath)\n\n\ttestFile := filepath.Join(tmpdir, \"test\")\n\trequire.NoError(t, os.WriteFile(testFile, []byte(\"test\"), 0o600))\n\n\tcmd := c.NewDockerComposeCmd(t, \"-p\", projectName, \"-f\", composeFilePath, \"up\", \"--watch\")\n\tbuffer := bytes.NewBuffer(nil)\n\tcmd.Stdout = buffer\n\twatch := icmd.StartCmd(cmd)\n\n\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\tif strings.Contains(watch.Stdout(), \"Attaching to \") {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"%v\", watch.Stdout())\n\t})\n\n\twaitRebuild := func(service string, expected string) {\n\t\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\t\tcat := c.RunDockerComposeCmdNoCheck(t, \"-p\", projectName, \"exec\", service, \"cat\", \"/data/\"+service)\n\t\t\tif strings.Contains(cat.Stdout(), expected) {\n\t\t\t\treturn poll.Success()\n\t\t\t}\n\t\t\treturn poll.Continue(\"%v\", cat.Combined())\n\t\t})\n\t}\n\twaitRebuild(\"a\", \"test\")\n\twaitRebuild(\"b\", \"test\")\n\twaitRebuild(\"c\", \"test\")\n\n\trequire.NoError(t, os.WriteFile(testFile, []byte(\"updated\"), 0o600))\n\twaitRebuild(\"a\", \"updated\")\n\twaitRebuild(\"b\", \"updated\")\n\twaitRebuild(\"c\", \"updated\")\n\n\tc.RunDockerComposeCmdNoCheck(t, \"-p\", projectName, \"kill\", \"-s\", \"9\")\n}\n\nfunc TestWatchIncludes(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"test_watch_includes\"\n\n\tdefer c.cleanupWithDown(t, projectName)\n\n\ttmpdir := t.TempDir()\n\tcomposeFilePath := filepath.Join(tmpdir, \"compose.yaml\")\n\tCopyFile(t, filepath.Join(\"fixtures\", \"watch\", 
\"include.yaml\"), composeFilePath)\n\n\tcmd := c.NewDockerComposeCmd(t, \"-p\", projectName, \"-f\", composeFilePath, \"up\", \"--watch\")\n\tbuffer := bytes.NewBuffer(nil)\n\tcmd.Stdout = buffer\n\twatch := icmd.StartCmd(cmd)\n\n\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\tif strings.Contains(watch.Stdout(), \"Attaching to \") {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"%v\", watch.Stdout())\n\t})\n\n\trequire.NoError(t, os.WriteFile(filepath.Join(tmpdir, \"B.test\"), []byte(\"test\"), 0o600))\n\trequire.NoError(t, os.WriteFile(filepath.Join(tmpdir, \"A.test\"), []byte(\"test\"), 0o600))\n\n\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\tcat := c.RunDockerComposeCmdNoCheck(t, \"-p\", projectName, \"exec\", \"a\", \"ls\", \"/data/\")\n\t\tif strings.Contains(cat.Stdout(), \"A.test\") {\n\t\t\tassert.Check(t, !strings.Contains(cat.Stdout(), \"B.test\"))\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"%v\", cat.Combined())\n\t})\n\n\tc.RunDockerComposeCmdNoCheck(t, \"-p\", projectName, \"kill\", \"-s\", \"9\")\n}\n\nfunc TestCheckWarningXInitialSyn(t *testing.T) {\n\tc := NewCLI(t)\n\tconst projectName = \"test_watch_warn_initial_syn\"\n\n\tdefer c.cleanupWithDown(t, projectName)\n\n\ttmpdir := t.TempDir()\n\tcomposeFilePath := filepath.Join(tmpdir, \"compose.yaml\")\n\tCopyFile(t, filepath.Join(\"fixtures\", \"watch\", \"x-initialSync.yaml\"), composeFilePath)\n\tcmd := c.NewDockerComposeCmd(t, \"-p\", projectName, \"-f\", composeFilePath, \"--verbose\", \"up\", \"--watch\")\n\tbuffer := bytes.NewBuffer(nil)\n\tcmd.Stdout = buffer\n\twatch := icmd.StartCmd(cmd)\n\n\tpoll.WaitOn(t, func(l poll.LogT) poll.Result {\n\t\tif strings.Contains(watch.Combined(), \"x-initialSync is DEPRECATED, please use the official `initial_sync` attribute\") {\n\t\t\treturn poll.Success()\n\t\t}\n\t\treturn poll.Continue(\"%v\", watch.Stdout())\n\t})\n\n\tc.RunDockerComposeCmdNoCheck(t, \"-p\", projectName, \"kill\", \"-s\", 
\"9\")\n}\n"
  },
  {
    "path": "pkg/mocks/mock_docker_api.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: github.com/moby/moby/client (interfaces: APIClient)\n//\n// Generated by this command:\n//\n//\tmockgen -destination pkg/mocks/mock_docker_api.go -package mocks github.com/moby/moby/client APIClient\n//\n\n// Package mocks is a generated GoMock package.\npackage mocks\n\nimport (\n\tcontext \"context\"\n\tio \"io\"\n\tnet \"net\"\n\treflect \"reflect\"\n\n\tclient \"github.com/moby/moby/client\"\n\tgomock \"go.uber.org/mock/gomock\"\n)\n\n// MockAPIClient is a mock of APIClient interface.\ntype MockAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockAPIClientMockRecorder\n}\n\n// MockAPIClientMockRecorder is the mock recorder for MockAPIClient.\ntype MockAPIClientMockRecorder struct {\n\tmock *MockAPIClient\n}\n\n// NewMockAPIClient creates a new mock instance.\nfunc NewMockAPIClient(ctrl *gomock.Controller) *MockAPIClient {\n\tmock := &MockAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use.\nfunc (m *MockAPIClient) EXPECT() *MockAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// BuildCachePrune mocks base method.\nfunc (m *MockAPIClient) BuildCachePrune(arg0 context.Context, arg1 client.BuildCachePruneOptions) (client.BuildCachePruneResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildCachePrune\", arg0, arg1)\n\tret0, _ := ret[0].(client.BuildCachePruneResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// BuildCachePrune indicates an expected call of BuildCachePrune.\nfunc (mr *MockAPIClientMockRecorder) BuildCachePrune(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildCachePrune\", reflect.TypeOf((*MockAPIClient)(nil).BuildCachePrune), arg0, arg1)\n}\n\n// BuildCancel mocks base method.\nfunc (m *MockAPIClient) BuildCancel(arg0 context.Context, arg1 string, 
arg2 client.BuildCancelOptions) (client.BuildCancelResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildCancel\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.BuildCancelResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// BuildCancel indicates an expected call of BuildCancel.\nfunc (mr *MockAPIClientMockRecorder) BuildCancel(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildCancel\", reflect.TypeOf((*MockAPIClient)(nil).BuildCancel), arg0, arg1, arg2)\n}\n\n// CheckpointCreate mocks base method.\nfunc (m *MockAPIClient) CheckpointCreate(arg0 context.Context, arg1 string, arg2 client.CheckpointCreateOptions) (client.CheckpointCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CheckpointCreate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.CheckpointCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CheckpointCreate indicates an expected call of CheckpointCreate.\nfunc (mr *MockAPIClientMockRecorder) CheckpointCreate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CheckpointCreate\", reflect.TypeOf((*MockAPIClient)(nil).CheckpointCreate), arg0, arg1, arg2)\n}\n\n// CheckpointList mocks base method.\nfunc (m *MockAPIClient) CheckpointList(arg0 context.Context, arg1 string, arg2 client.CheckpointListOptions) (client.CheckpointListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CheckpointList\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.CheckpointListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CheckpointList indicates an expected call of CheckpointList.\nfunc (mr *MockAPIClientMockRecorder) CheckpointList(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CheckpointList\", reflect.TypeOf((*MockAPIClient)(nil).CheckpointList), 
arg0, arg1, arg2)\n}\n\n// CheckpointRemove mocks base method.\nfunc (m *MockAPIClient) CheckpointRemove(arg0 context.Context, arg1 string, arg2 client.CheckpointRemoveOptions) (client.CheckpointRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CheckpointRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.CheckpointRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CheckpointRemove indicates an expected call of CheckpointRemove.\nfunc (mr *MockAPIClientMockRecorder) CheckpointRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CheckpointRemove\", reflect.TypeOf((*MockAPIClient)(nil).CheckpointRemove), arg0, arg1, arg2)\n}\n\n// ClientVersion mocks base method.\nfunc (m *MockAPIClient) ClientVersion() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ClientVersion\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// ClientVersion indicates an expected call of ClientVersion.\nfunc (mr *MockAPIClientMockRecorder) ClientVersion() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ClientVersion\", reflect.TypeOf((*MockAPIClient)(nil).ClientVersion))\n}\n\n// Close mocks base method.\nfunc (m *MockAPIClient) Close() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Close\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Close indicates an expected call of Close.\nfunc (mr *MockAPIClientMockRecorder) Close() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Close\", reflect.TypeOf((*MockAPIClient)(nil).Close))\n}\n\n// ConfigCreate mocks base method.\nfunc (m *MockAPIClient) ConfigCreate(arg0 context.Context, arg1 client.ConfigCreateOptions) (client.ConfigCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigCreate\", arg0, arg1)\n\tret0, _ := ret[0].(client.ConfigCreateResult)\n\tret1, _ := 
ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigCreate indicates an expected call of ConfigCreate.\nfunc (mr *MockAPIClientMockRecorder) ConfigCreate(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigCreate\", reflect.TypeOf((*MockAPIClient)(nil).ConfigCreate), arg0, arg1)\n}\n\n// ConfigInspect mocks base method.\nfunc (m *MockAPIClient) ConfigInspect(arg0 context.Context, arg1 string, arg2 client.ConfigInspectOptions) (client.ConfigInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ConfigInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigInspect indicates an expected call of ConfigInspect.\nfunc (mr *MockAPIClientMockRecorder) ConfigInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigInspect\", reflect.TypeOf((*MockAPIClient)(nil).ConfigInspect), arg0, arg1, arg2)\n}\n\n// ConfigList mocks base method.\nfunc (m *MockAPIClient) ConfigList(arg0 context.Context, arg1 client.ConfigListOptions) (client.ConfigListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigList\", arg0, arg1)\n\tret0, _ := ret[0].(client.ConfigListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigList indicates an expected call of ConfigList.\nfunc (mr *MockAPIClientMockRecorder) ConfigList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigList\", reflect.TypeOf((*MockAPIClient)(nil).ConfigList), arg0, arg1)\n}\n\n// ConfigRemove mocks base method.\nfunc (m *MockAPIClient) ConfigRemove(arg0 context.Context, arg1 string, arg2 client.ConfigRemoveOptions) (client.ConfigRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigRemove\", arg0, arg1, arg2)\n\tret0, _ := 
ret[0].(client.ConfigRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigRemove indicates an expected call of ConfigRemove.\nfunc (mr *MockAPIClientMockRecorder) ConfigRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigRemove\", reflect.TypeOf((*MockAPIClient)(nil).ConfigRemove), arg0, arg1, arg2)\n}\n\n// ConfigUpdate mocks base method.\nfunc (m *MockAPIClient) ConfigUpdate(arg0 context.Context, arg1 string, arg2 client.ConfigUpdateOptions) (client.ConfigUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigUpdate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ConfigUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigUpdate indicates an expected call of ConfigUpdate.\nfunc (mr *MockAPIClientMockRecorder) ConfigUpdate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigUpdate\", reflect.TypeOf((*MockAPIClient)(nil).ConfigUpdate), arg0, arg1, arg2)\n}\n\n// ContainerAttach mocks base method.\nfunc (m *MockAPIClient) ContainerAttach(arg0 context.Context, arg1 string, arg2 client.ContainerAttachOptions) (client.ContainerAttachResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerAttach\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerAttachResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerAttach indicates an expected call of ContainerAttach.\nfunc (mr *MockAPIClientMockRecorder) ContainerAttach(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerAttach\", reflect.TypeOf((*MockAPIClient)(nil).ContainerAttach), arg0, arg1, arg2)\n}\n\n// ContainerCommit mocks base method.\nfunc (m *MockAPIClient) ContainerCommit(arg0 context.Context, arg1 string, arg2 client.ContainerCommitOptions) 
(client.ContainerCommitResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerCommit\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerCommitResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerCommit indicates an expected call of ContainerCommit.\nfunc (mr *MockAPIClientMockRecorder) ContainerCommit(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerCommit\", reflect.TypeOf((*MockAPIClient)(nil).ContainerCommit), arg0, arg1, arg2)\n}\n\n// ContainerCreate mocks base method.\nfunc (m *MockAPIClient) ContainerCreate(arg0 context.Context, arg1 client.ContainerCreateOptions) (client.ContainerCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerCreate\", arg0, arg1)\n\tret0, _ := ret[0].(client.ContainerCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerCreate indicates an expected call of ContainerCreate.\nfunc (mr *MockAPIClientMockRecorder) ContainerCreate(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerCreate\", reflect.TypeOf((*MockAPIClient)(nil).ContainerCreate), arg0, arg1)\n}\n\n// ContainerDiff mocks base method.\nfunc (m *MockAPIClient) ContainerDiff(arg0 context.Context, arg1 string, arg2 client.ContainerDiffOptions) (client.ContainerDiffResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerDiff\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerDiffResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerDiff indicates an expected call of ContainerDiff.\nfunc (mr *MockAPIClientMockRecorder) ContainerDiff(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerDiff\", reflect.TypeOf((*MockAPIClient)(nil).ContainerDiff), arg0, arg1, arg2)\n}\n\n// ContainerExport mocks base 
method.\nfunc (m *MockAPIClient) ContainerExport(arg0 context.Context, arg1 string, arg2 client.ContainerExportOptions) (client.ContainerExportResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExport\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerExportResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExport indicates an expected call of ContainerExport.\nfunc (mr *MockAPIClientMockRecorder) ContainerExport(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExport\", reflect.TypeOf((*MockAPIClient)(nil).ContainerExport), arg0, arg1, arg2)\n}\n\n// ContainerInspect mocks base method.\nfunc (m *MockAPIClient) ContainerInspect(arg0 context.Context, arg1 string, arg2 client.ContainerInspectOptions) (client.ContainerInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerInspect indicates an expected call of ContainerInspect.\nfunc (mr *MockAPIClientMockRecorder) ContainerInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerInspect\", reflect.TypeOf((*MockAPIClient)(nil).ContainerInspect), arg0, arg1, arg2)\n}\n\n// ContainerKill mocks base method.\nfunc (m *MockAPIClient) ContainerKill(arg0 context.Context, arg1 string, arg2 client.ContainerKillOptions) (client.ContainerKillResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerKill\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerKillResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerKill indicates an expected call of ContainerKill.\nfunc (mr *MockAPIClientMockRecorder) ContainerKill(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerKill\", reflect.TypeOf((*MockAPIClient)(nil).ContainerKill), arg0, arg1, arg2)\n}\n\n// ContainerList mocks base method.\nfunc (m *MockAPIClient) ContainerList(arg0 context.Context, arg1 client.ContainerListOptions) (client.ContainerListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerList\", arg0, arg1)\n\tret0, _ := ret[0].(client.ContainerListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerList indicates an expected call of ContainerList.\nfunc (mr *MockAPIClientMockRecorder) ContainerList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerList\", reflect.TypeOf((*MockAPIClient)(nil).ContainerList), arg0, arg1)\n}\n\n// ContainerLogs mocks base method.\nfunc (m *MockAPIClient) ContainerLogs(arg0 context.Context, arg1 string, arg2 client.ContainerLogsOptions) (client.ContainerLogsResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerLogs\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerLogsResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerLogs indicates an expected call of ContainerLogs.\nfunc (mr *MockAPIClientMockRecorder) ContainerLogs(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerLogs\", reflect.TypeOf((*MockAPIClient)(nil).ContainerLogs), arg0, arg1, arg2)\n}\n\n// ContainerPause mocks base method.\nfunc (m *MockAPIClient) ContainerPause(arg0 context.Context, arg1 string, arg2 client.ContainerPauseOptions) (client.ContainerPauseResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerPause\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerPauseResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerPause indicates an expected call of ContainerPause.\nfunc (mr *MockAPIClientMockRecorder) 
ContainerPause(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerPause\", reflect.TypeOf((*MockAPIClient)(nil).ContainerPause), arg0, arg1, arg2)\n}\n\n// ContainerPrune mocks base method.\nfunc (m *MockAPIClient) ContainerPrune(arg0 context.Context, arg1 client.ContainerPruneOptions) (client.ContainerPruneResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerPrune\", arg0, arg1)\n\tret0, _ := ret[0].(client.ContainerPruneResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerPrune indicates an expected call of ContainerPrune.\nfunc (mr *MockAPIClientMockRecorder) ContainerPrune(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerPrune\", reflect.TypeOf((*MockAPIClient)(nil).ContainerPrune), arg0, arg1)\n}\n\n// ContainerRemove mocks base method.\nfunc (m *MockAPIClient) ContainerRemove(arg0 context.Context, arg1 string, arg2 client.ContainerRemoveOptions) (client.ContainerRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerRemove indicates an expected call of ContainerRemove.\nfunc (mr *MockAPIClientMockRecorder) ContainerRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRemove\", reflect.TypeOf((*MockAPIClient)(nil).ContainerRemove), arg0, arg1, arg2)\n}\n\n// ContainerRename mocks base method.\nfunc (m *MockAPIClient) ContainerRename(arg0 context.Context, arg1 string, arg2 client.ContainerRenameOptions) (client.ContainerRenameResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRename\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerRenameResult)\n\tret1, _ := 
ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerRename indicates an expected call of ContainerRename.\nfunc (mr *MockAPIClientMockRecorder) ContainerRename(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRename\", reflect.TypeOf((*MockAPIClient)(nil).ContainerRename), arg0, arg1, arg2)\n}\n\n// ContainerResize mocks base method.\nfunc (m *MockAPIClient) ContainerResize(arg0 context.Context, arg1 string, arg2 client.ContainerResizeOptions) (client.ContainerResizeResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerResize\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerResizeResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerResize indicates an expected call of ContainerResize.\nfunc (mr *MockAPIClientMockRecorder) ContainerResize(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerResize\", reflect.TypeOf((*MockAPIClient)(nil).ContainerResize), arg0, arg1, arg2)\n}\n\n// ContainerRestart mocks base method.\nfunc (m *MockAPIClient) ContainerRestart(arg0 context.Context, arg1 string, arg2 client.ContainerRestartOptions) (client.ContainerRestartResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRestart\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerRestartResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerRestart indicates an expected call of ContainerRestart.\nfunc (mr *MockAPIClientMockRecorder) ContainerRestart(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRestart\", reflect.TypeOf((*MockAPIClient)(nil).ContainerRestart), arg0, arg1, arg2)\n}\n\n// ContainerStart mocks base method.\nfunc (m *MockAPIClient) ContainerStart(arg0 context.Context, arg1 string, arg2 client.ContainerStartOptions) 
(client.ContainerStartResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStart\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerStartResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStart indicates an expected call of ContainerStart.\nfunc (mr *MockAPIClientMockRecorder) ContainerStart(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStart\", reflect.TypeOf((*MockAPIClient)(nil).ContainerStart), arg0, arg1, arg2)\n}\n\n// ContainerStatPath mocks base method.\nfunc (m *MockAPIClient) ContainerStatPath(arg0 context.Context, arg1 string, arg2 client.ContainerStatPathOptions) (client.ContainerStatPathResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStatPath\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerStatPathResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStatPath indicates an expected call of ContainerStatPath.\nfunc (mr *MockAPIClientMockRecorder) ContainerStatPath(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStatPath\", reflect.TypeOf((*MockAPIClient)(nil).ContainerStatPath), arg0, arg1, arg2)\n}\n\n// ContainerStats mocks base method.\nfunc (m *MockAPIClient) ContainerStats(arg0 context.Context, arg1 string, arg2 client.ContainerStatsOptions) (client.ContainerStatsResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStats\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerStatsResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStats indicates an expected call of ContainerStats.\nfunc (mr *MockAPIClientMockRecorder) ContainerStats(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStats\", 
reflect.TypeOf((*MockAPIClient)(nil).ContainerStats), arg0, arg1, arg2)\n}\n\n// ContainerStop mocks base method.\nfunc (m *MockAPIClient) ContainerStop(arg0 context.Context, arg1 string, arg2 client.ContainerStopOptions) (client.ContainerStopResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStop\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerStopResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStop indicates an expected call of ContainerStop.\nfunc (mr *MockAPIClientMockRecorder) ContainerStop(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStop\", reflect.TypeOf((*MockAPIClient)(nil).ContainerStop), arg0, arg1, arg2)\n}\n\n// ContainerTop mocks base method.\nfunc (m *MockAPIClient) ContainerTop(arg0 context.Context, arg1 string, arg2 client.ContainerTopOptions) (client.ContainerTopResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerTop\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerTopResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerTop indicates an expected call of ContainerTop.\nfunc (mr *MockAPIClientMockRecorder) ContainerTop(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerTop\", reflect.TypeOf((*MockAPIClient)(nil).ContainerTop), arg0, arg1, arg2)\n}\n\n// ContainerUnpause mocks base method.\nfunc (m *MockAPIClient) ContainerUnpause(arg0 context.Context, arg1 string, arg2 client.ContainerUnpauseOptions) (client.ContainerUnpauseResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerUnpause\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerUnpauseResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerUnpause indicates an expected call of ContainerUnpause.\nfunc (mr *MockAPIClientMockRecorder) ContainerUnpause(arg0, arg1, arg2 
any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerUnpause\", reflect.TypeOf((*MockAPIClient)(nil).ContainerUnpause), arg0, arg1, arg2)\n}\n\n// ContainerUpdate mocks base method.\nfunc (m *MockAPIClient) ContainerUpdate(arg0 context.Context, arg1 string, arg2 client.ContainerUpdateOptions) (client.ContainerUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerUpdate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerUpdate indicates an expected call of ContainerUpdate.\nfunc (mr *MockAPIClientMockRecorder) ContainerUpdate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerUpdate\", reflect.TypeOf((*MockAPIClient)(nil).ContainerUpdate), arg0, arg1, arg2)\n}\n\n// ContainerWait mocks base method.\nfunc (m *MockAPIClient) ContainerWait(arg0 context.Context, arg1 string, arg2 client.ContainerWaitOptions) client.ContainerWaitResult {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerWait\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ContainerWaitResult)\n\treturn ret0\n}\n\n// ContainerWait indicates an expected call of ContainerWait.\nfunc (mr *MockAPIClientMockRecorder) ContainerWait(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerWait\", reflect.TypeOf((*MockAPIClient)(nil).ContainerWait), arg0, arg1, arg2)\n}\n\n// CopyFromContainer mocks base method.\nfunc (m *MockAPIClient) CopyFromContainer(arg0 context.Context, arg1 string, arg2 client.CopyFromContainerOptions) (client.CopyFromContainerResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CopyFromContainer\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.CopyFromContainerResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// 
CopyFromContainer indicates an expected call of CopyFromContainer.\nfunc (mr *MockAPIClientMockRecorder) CopyFromContainer(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CopyFromContainer\", reflect.TypeOf((*MockAPIClient)(nil).CopyFromContainer), arg0, arg1, arg2)\n}\n\n// CopyToContainer mocks base method.\nfunc (m *MockAPIClient) CopyToContainer(arg0 context.Context, arg1 string, arg2 client.CopyToContainerOptions) (client.CopyToContainerResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CopyToContainer\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.CopyToContainerResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CopyToContainer indicates an expected call of CopyToContainer.\nfunc (mr *MockAPIClientMockRecorder) CopyToContainer(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CopyToContainer\", reflect.TypeOf((*MockAPIClient)(nil).CopyToContainer), arg0, arg1, arg2)\n}\n\n// DaemonHost mocks base method.\nfunc (m *MockAPIClient) DaemonHost() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DaemonHost\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// DaemonHost indicates an expected call of DaemonHost.\nfunc (mr *MockAPIClientMockRecorder) DaemonHost() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DaemonHost\", reflect.TypeOf((*MockAPIClient)(nil).DaemonHost))\n}\n\n// DialHijack mocks base method.\nfunc (m *MockAPIClient) DialHijack(arg0 context.Context, arg1, arg2 string, arg3 map[string][]string) (net.Conn, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DialHijack\", arg0, arg1, arg2, arg3)\n\tret0, _ := ret[0].(net.Conn)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DialHijack indicates an expected call of DialHijack.\nfunc (mr *MockAPIClientMockRecorder) DialHijack(arg0, arg1, arg2, 
arg3 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DialHijack\", reflect.TypeOf((*MockAPIClient)(nil).DialHijack), arg0, arg1, arg2, arg3)\n}\n\n// Dialer mocks base method.\nfunc (m *MockAPIClient) Dialer() func(context.Context) (net.Conn, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Dialer\")\n\tret0, _ := ret[0].(func(context.Context) (net.Conn, error))\n\treturn ret0\n}\n\n// Dialer indicates an expected call of Dialer.\nfunc (mr *MockAPIClientMockRecorder) Dialer() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Dialer\", reflect.TypeOf((*MockAPIClient)(nil).Dialer))\n}\n\n// DiskUsage mocks base method.\nfunc (m *MockAPIClient) DiskUsage(arg0 context.Context, arg1 client.DiskUsageOptions) (client.DiskUsageResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DiskUsage\", arg0, arg1)\n\tret0, _ := ret[0].(client.DiskUsageResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DiskUsage indicates an expected call of DiskUsage.\nfunc (mr *MockAPIClientMockRecorder) DiskUsage(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DiskUsage\", reflect.TypeOf((*MockAPIClient)(nil).DiskUsage), arg0, arg1)\n}\n\n// DistributionInspect mocks base method.\nfunc (m *MockAPIClient) DistributionInspect(arg0 context.Context, arg1 string, arg2 client.DistributionInspectOptions) (client.DistributionInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DistributionInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.DistributionInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DistributionInspect indicates an expected call of DistributionInspect.\nfunc (mr *MockAPIClientMockRecorder) DistributionInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, 
\"DistributionInspect\", reflect.TypeOf((*MockAPIClient)(nil).DistributionInspect), arg0, arg1, arg2)\n}\n\n// Events mocks base method.\nfunc (m *MockAPIClient) Events(arg0 context.Context, arg1 client.EventsListOptions) client.EventsResult {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Events\", arg0, arg1)\n\tret0, _ := ret[0].(client.EventsResult)\n\treturn ret0\n}\n\n// Events indicates an expected call of Events.\nfunc (mr *MockAPIClientMockRecorder) Events(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Events\", reflect.TypeOf((*MockAPIClient)(nil).Events), arg0, arg1)\n}\n\n// ExecAttach mocks base method.\nfunc (m *MockAPIClient) ExecAttach(arg0 context.Context, arg1 string, arg2 client.ExecAttachOptions) (client.ExecAttachResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ExecAttach\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ExecAttachResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ExecAttach indicates an expected call of ExecAttach.\nfunc (mr *MockAPIClientMockRecorder) ExecAttach(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecAttach\", reflect.TypeOf((*MockAPIClient)(nil).ExecAttach), arg0, arg1, arg2)\n}\n\n// ExecCreate mocks base method.\nfunc (m *MockAPIClient) ExecCreate(arg0 context.Context, arg1 string, arg2 client.ExecCreateOptions) (client.ExecCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ExecCreate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ExecCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ExecCreate indicates an expected call of ExecCreate.\nfunc (mr *MockAPIClientMockRecorder) ExecCreate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecCreate\", reflect.TypeOf((*MockAPIClient)(nil).ExecCreate), arg0, arg1, 
arg2)\n}\n\n// ExecInspect mocks base method.\nfunc (m *MockAPIClient) ExecInspect(arg0 context.Context, arg1 string, arg2 client.ExecInspectOptions) (client.ExecInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ExecInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ExecInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ExecInspect indicates an expected call of ExecInspect.\nfunc (mr *MockAPIClientMockRecorder) ExecInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecInspect\", reflect.TypeOf((*MockAPIClient)(nil).ExecInspect), arg0, arg1, arg2)\n}\n\n// ExecResize mocks base method.\nfunc (m *MockAPIClient) ExecResize(arg0 context.Context, arg1 string, arg2 client.ExecResizeOptions) (client.ExecResizeResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ExecResize\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ExecResizeResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ExecResize indicates an expected call of ExecResize.\nfunc (mr *MockAPIClientMockRecorder) ExecResize(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecResize\", reflect.TypeOf((*MockAPIClient)(nil).ExecResize), arg0, arg1, arg2)\n}\n\n// ExecStart mocks base method.\nfunc (m *MockAPIClient) ExecStart(arg0 context.Context, arg1 string, arg2 client.ExecStartOptions) (client.ExecStartResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ExecStart\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ExecStartResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ExecStart indicates an expected call of ExecStart.\nfunc (mr *MockAPIClientMockRecorder) ExecStart(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecStart\", reflect.TypeOf((*MockAPIClient)(nil).ExecStart), 
arg0, arg1, arg2)\n}\n\n// ImageBuild mocks base method.\nfunc (m *MockAPIClient) ImageBuild(arg0 context.Context, arg1 io.Reader, arg2 client.ImageBuildOptions) (client.ImageBuildResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageBuild\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ImageBuildResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageBuild indicates an expected call of ImageBuild.\nfunc (mr *MockAPIClientMockRecorder) ImageBuild(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageBuild\", reflect.TypeOf((*MockAPIClient)(nil).ImageBuild), arg0, arg1, arg2)\n}\n\n// ImageHistory mocks base method.\nfunc (m *MockAPIClient) ImageHistory(arg0 context.Context, arg1 string, arg2 ...client.ImageHistoryOption) (client.ImageHistoryResult, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []any{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ImageHistory\", varargs...)\n\tret0, _ := ret[0].(client.ImageHistoryResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageHistory indicates an expected call of ImageHistory.\nfunc (mr *MockAPIClientMockRecorder) ImageHistory(arg0, arg1 any, arg2 ...any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]any{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageHistory\", reflect.TypeOf((*MockAPIClient)(nil).ImageHistory), varargs...)\n}\n\n// ImageImport mocks base method.\nfunc (m *MockAPIClient) ImageImport(arg0 context.Context, arg1 client.ImageImportSource, arg2 string, arg3 client.ImageImportOptions) (client.ImageImportResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageImport\", arg0, arg1, arg2, arg3)\n\tret0, _ := ret[0].(client.ImageImportResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageImport indicates an expected call of ImageImport.\nfunc (mr 
*MockAPIClientMockRecorder) ImageImport(arg0, arg1, arg2, arg3 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageImport\", reflect.TypeOf((*MockAPIClient)(nil).ImageImport), arg0, arg1, arg2, arg3)\n}\n\n// ImageInspect mocks base method.\nfunc (m *MockAPIClient) ImageInspect(arg0 context.Context, arg1 string, arg2 ...client.ImageInspectOption) (client.ImageInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []any{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ImageInspect\", varargs...)\n\tret0, _ := ret[0].(client.ImageInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageInspect indicates an expected call of ImageInspect.\nfunc (mr *MockAPIClientMockRecorder) ImageInspect(arg0, arg1 any, arg2 ...any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]any{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageInspect\", reflect.TypeOf((*MockAPIClient)(nil).ImageInspect), varargs...)\n}\n\n// ImageList mocks base method.\nfunc (m *MockAPIClient) ImageList(arg0 context.Context, arg1 client.ImageListOptions) (client.ImageListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageList\", arg0, arg1)\n\tret0, _ := ret[0].(client.ImageListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageList indicates an expected call of ImageList.\nfunc (mr *MockAPIClientMockRecorder) ImageList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageList\", reflect.TypeOf((*MockAPIClient)(nil).ImageList), arg0, arg1)\n}\n\n// ImageLoad mocks base method.\nfunc (m *MockAPIClient) ImageLoad(arg0 context.Context, arg1 io.Reader, arg2 ...client.ImageLoadOption) (client.ImageLoadResult, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []any{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = 
append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ImageLoad\", varargs...)\n\tret0, _ := ret[0].(client.ImageLoadResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageLoad indicates an expected call of ImageLoad.\nfunc (mr *MockAPIClientMockRecorder) ImageLoad(arg0, arg1 any, arg2 ...any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]any{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageLoad\", reflect.TypeOf((*MockAPIClient)(nil).ImageLoad), varargs...)\n}\n\n// ImagePrune mocks base method.\nfunc (m *MockAPIClient) ImagePrune(arg0 context.Context, arg1 client.ImagePruneOptions) (client.ImagePruneResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePrune\", arg0, arg1)\n\tret0, _ := ret[0].(client.ImagePruneResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePrune indicates an expected call of ImagePrune.\nfunc (mr *MockAPIClientMockRecorder) ImagePrune(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePrune\", reflect.TypeOf((*MockAPIClient)(nil).ImagePrune), arg0, arg1)\n}\n\n// ImagePull mocks base method.\nfunc (m *MockAPIClient) ImagePull(arg0 context.Context, arg1 string, arg2 client.ImagePullOptions) (client.ImagePullResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePull\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ImagePullResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePull indicates an expected call of ImagePull.\nfunc (mr *MockAPIClientMockRecorder) ImagePull(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePull\", reflect.TypeOf((*MockAPIClient)(nil).ImagePull), arg0, arg1, arg2)\n}\n\n// ImagePush mocks base method.\nfunc (m *MockAPIClient) ImagePush(arg0 context.Context, arg1 string, arg2 client.ImagePushOptions) (client.ImagePushResponse, 
error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePush\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ImagePushResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePush indicates an expected call of ImagePush.\nfunc (mr *MockAPIClientMockRecorder) ImagePush(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePush\", reflect.TypeOf((*MockAPIClient)(nil).ImagePush), arg0, arg1, arg2)\n}\n\n// ImageRemove mocks base method.\nfunc (m *MockAPIClient) ImageRemove(arg0 context.Context, arg1 string, arg2 client.ImageRemoveOptions) (client.ImageRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ImageRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageRemove indicates an expected call of ImageRemove.\nfunc (mr *MockAPIClientMockRecorder) ImageRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageRemove\", reflect.TypeOf((*MockAPIClient)(nil).ImageRemove), arg0, arg1, arg2)\n}\n\n// ImageSave mocks base method.\nfunc (m *MockAPIClient) ImageSave(arg0 context.Context, arg1 []string, arg2 ...client.ImageSaveOption) (client.ImageSaveResult, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []any{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ImageSave\", varargs...)\n\tret0, _ := ret[0].(client.ImageSaveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageSave indicates an expected call of ImageSave.\nfunc (mr *MockAPIClientMockRecorder) ImageSave(arg0, arg1 any, arg2 ...any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]any{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageSave\", reflect.TypeOf((*MockAPIClient)(nil).ImageSave), varargs...)\n}\n\n// 
ImageSearch mocks base method.\nfunc (m *MockAPIClient) ImageSearch(arg0 context.Context, arg1 string, arg2 client.ImageSearchOptions) (client.ImageSearchResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageSearch\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ImageSearchResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageSearch indicates an expected call of ImageSearch.\nfunc (mr *MockAPIClientMockRecorder) ImageSearch(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageSearch\", reflect.TypeOf((*MockAPIClient)(nil).ImageSearch), arg0, arg1, arg2)\n}\n\n// ImageTag mocks base method.\nfunc (m *MockAPIClient) ImageTag(arg0 context.Context, arg1 client.ImageTagOptions) (client.ImageTagResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageTag\", arg0, arg1)\n\tret0, _ := ret[0].(client.ImageTagResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageTag indicates an expected call of ImageTag.\nfunc (mr *MockAPIClientMockRecorder) ImageTag(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageTag\", reflect.TypeOf((*MockAPIClient)(nil).ImageTag), arg0, arg1)\n}\n\n// Info mocks base method.\nfunc (m *MockAPIClient) Info(arg0 context.Context, arg1 client.InfoOptions) (client.SystemInfoResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Info\", arg0, arg1)\n\tret0, _ := ret[0].(client.SystemInfoResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Info indicates an expected call of Info.\nfunc (mr *MockAPIClientMockRecorder) Info(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Info\", reflect.TypeOf((*MockAPIClient)(nil).Info), arg0, arg1)\n}\n\n// NetworkConnect mocks base method.\nfunc (m *MockAPIClient) NetworkConnect(arg0 context.Context, arg1 string, arg2 
client.NetworkConnectOptions) (client.NetworkConnectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkConnect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NetworkConnectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkConnect indicates an expected call of NetworkConnect.\nfunc (mr *MockAPIClientMockRecorder) NetworkConnect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkConnect\", reflect.TypeOf((*MockAPIClient)(nil).NetworkConnect), arg0, arg1, arg2)\n}\n\n// NetworkCreate mocks base method.\nfunc (m *MockAPIClient) NetworkCreate(arg0 context.Context, arg1 string, arg2 client.NetworkCreateOptions) (client.NetworkCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkCreate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NetworkCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkCreate indicates an expected call of NetworkCreate.\nfunc (mr *MockAPIClientMockRecorder) NetworkCreate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkCreate\", reflect.TypeOf((*MockAPIClient)(nil).NetworkCreate), arg0, arg1, arg2)\n}\n\n// NetworkDisconnect mocks base method.\nfunc (m *MockAPIClient) NetworkDisconnect(arg0 context.Context, arg1 string, arg2 client.NetworkDisconnectOptions) (client.NetworkDisconnectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkDisconnect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NetworkDisconnectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkDisconnect indicates an expected call of NetworkDisconnect.\nfunc (mr *MockAPIClientMockRecorder) NetworkDisconnect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkDisconnect\", 
reflect.TypeOf((*MockAPIClient)(nil).NetworkDisconnect), arg0, arg1, arg2)\n}\n\n// NetworkInspect mocks base method.\nfunc (m *MockAPIClient) NetworkInspect(arg0 context.Context, arg1 string, arg2 client.NetworkInspectOptions) (client.NetworkInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NetworkInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkInspect indicates an expected call of NetworkInspect.\nfunc (mr *MockAPIClientMockRecorder) NetworkInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkInspect\", reflect.TypeOf((*MockAPIClient)(nil).NetworkInspect), arg0, arg1, arg2)\n}\n\n// NetworkList mocks base method.\nfunc (m *MockAPIClient) NetworkList(arg0 context.Context, arg1 client.NetworkListOptions) (client.NetworkListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkList\", arg0, arg1)\n\tret0, _ := ret[0].(client.NetworkListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkList indicates an expected call of NetworkList.\nfunc (mr *MockAPIClientMockRecorder) NetworkList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkList\", reflect.TypeOf((*MockAPIClient)(nil).NetworkList), arg0, arg1)\n}\n\n// NetworkPrune mocks base method.\nfunc (m *MockAPIClient) NetworkPrune(arg0 context.Context, arg1 client.NetworkPruneOptions) (client.NetworkPruneResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkPrune\", arg0, arg1)\n\tret0, _ := ret[0].(client.NetworkPruneResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkPrune indicates an expected call of NetworkPrune.\nfunc (mr *MockAPIClientMockRecorder) NetworkPrune(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkPrune\", reflect.TypeOf((*MockAPIClient)(nil).NetworkPrune), arg0, arg1)\n}\n\n// NetworkRemove mocks base method.\nfunc (m *MockAPIClient) NetworkRemove(arg0 context.Context, arg1 string, arg2 client.NetworkRemoveOptions) (client.NetworkRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NetworkRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkRemove indicates an expected call of NetworkRemove.\nfunc (mr *MockAPIClientMockRecorder) NetworkRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkRemove\", reflect.TypeOf((*MockAPIClient)(nil).NetworkRemove), arg0, arg1, arg2)\n}\n\n// NodeInspect mocks base method.\nfunc (m *MockAPIClient) NodeInspect(arg0 context.Context, arg1 string, arg2 client.NodeInspectOptions) (client.NodeInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NodeInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NodeInspect indicates an expected call of NodeInspect.\nfunc (mr *MockAPIClientMockRecorder) NodeInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeInspect\", reflect.TypeOf((*MockAPIClient)(nil).NodeInspect), arg0, arg1, arg2)\n}\n\n// NodeList mocks base method.\nfunc (m *MockAPIClient) NodeList(arg0 context.Context, arg1 client.NodeListOptions) (client.NodeListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeList\", arg0, arg1)\n\tret0, _ := ret[0].(client.NodeListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NodeList indicates an expected call of NodeList.\nfunc (mr *MockAPIClientMockRecorder) NodeList(arg0, arg1 any) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeList\", reflect.TypeOf((*MockAPIClient)(nil).NodeList), arg0, arg1)\n}\n\n// NodeRemove mocks base method.\nfunc (m *MockAPIClient) NodeRemove(arg0 context.Context, arg1 string, arg2 client.NodeRemoveOptions) (client.NodeRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NodeRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NodeRemove indicates an expected call of NodeRemove.\nfunc (mr *MockAPIClientMockRecorder) NodeRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeRemove\", reflect.TypeOf((*MockAPIClient)(nil).NodeRemove), arg0, arg1, arg2)\n}\n\n// NodeUpdate mocks base method.\nfunc (m *MockAPIClient) NodeUpdate(arg0 context.Context, arg1 string, arg2 client.NodeUpdateOptions) (client.NodeUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeUpdate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.NodeUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NodeUpdate indicates an expected call of NodeUpdate.\nfunc (mr *MockAPIClientMockRecorder) NodeUpdate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeUpdate\", reflect.TypeOf((*MockAPIClient)(nil).NodeUpdate), arg0, arg1, arg2)\n}\n\n// Ping mocks base method.\nfunc (m *MockAPIClient) Ping(arg0 context.Context, arg1 client.PingOptions) (client.PingResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", arg0, arg1)\n\tret0, _ := ret[0].(client.PingResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Ping indicates an expected call of Ping.\nfunc (mr *MockAPIClientMockRecorder) Ping(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockAPIClient)(nil).Ping), arg0, arg1)\n}\n\n// PluginCreate mocks base method.\nfunc (m *MockAPIClient) PluginCreate(arg0 context.Context, arg1 io.Reader, arg2 client.PluginCreateOptions) (client.PluginCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginCreate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginCreate indicates an expected call of PluginCreate.\nfunc (mr *MockAPIClientMockRecorder) PluginCreate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginCreate\", reflect.TypeOf((*MockAPIClient)(nil).PluginCreate), arg0, arg1, arg2)\n}\n\n// PluginDisable mocks base method.\nfunc (m *MockAPIClient) PluginDisable(arg0 context.Context, arg1 string, arg2 client.PluginDisableOptions) (client.PluginDisableResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginDisable\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginDisableResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginDisable indicates an expected call of PluginDisable.\nfunc (mr *MockAPIClientMockRecorder) PluginDisable(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginDisable\", reflect.TypeOf((*MockAPIClient)(nil).PluginDisable), arg0, arg1, arg2)\n}\n\n// PluginEnable mocks base method.\nfunc (m *MockAPIClient) PluginEnable(arg0 context.Context, arg1 string, arg2 client.PluginEnableOptions) (client.PluginEnableResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginEnable\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginEnableResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginEnable indicates an expected call of PluginEnable.\nfunc (mr *MockAPIClientMockRecorder) PluginEnable(arg0, 
arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginEnable\", reflect.TypeOf((*MockAPIClient)(nil).PluginEnable), arg0, arg1, arg2)\n}\n\n// PluginInspect mocks base method.\nfunc (m *MockAPIClient) PluginInspect(arg0 context.Context, arg1 string, arg2 client.PluginInspectOptions) (client.PluginInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginInspect indicates an expected call of PluginInspect.\nfunc (mr *MockAPIClientMockRecorder) PluginInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginInspect\", reflect.TypeOf((*MockAPIClient)(nil).PluginInspect), arg0, arg1, arg2)\n}\n\n// PluginInstall mocks base method.\nfunc (m *MockAPIClient) PluginInstall(arg0 context.Context, arg1 string, arg2 client.PluginInstallOptions) (client.PluginInstallResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginInstall\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginInstallResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginInstall indicates an expected call of PluginInstall.\nfunc (mr *MockAPIClientMockRecorder) PluginInstall(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginInstall\", reflect.TypeOf((*MockAPIClient)(nil).PluginInstall), arg0, arg1, arg2)\n}\n\n// PluginList mocks base method.\nfunc (m *MockAPIClient) PluginList(arg0 context.Context, arg1 client.PluginListOptions) (client.PluginListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginList\", arg0, arg1)\n\tret0, _ := ret[0].(client.PluginListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginList indicates an expected call of 
PluginList.\nfunc (mr *MockAPIClientMockRecorder) PluginList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginList\", reflect.TypeOf((*MockAPIClient)(nil).PluginList), arg0, arg1)\n}\n\n// PluginPush mocks base method.\nfunc (m *MockAPIClient) PluginPush(arg0 context.Context, arg1 string, arg2 client.PluginPushOptions) (client.PluginPushResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginPush\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginPushResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginPush indicates an expected call of PluginPush.\nfunc (mr *MockAPIClientMockRecorder) PluginPush(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginPush\", reflect.TypeOf((*MockAPIClient)(nil).PluginPush), arg0, arg1, arg2)\n}\n\n// PluginRemove mocks base method.\nfunc (m *MockAPIClient) PluginRemove(arg0 context.Context, arg1 string, arg2 client.PluginRemoveOptions) (client.PluginRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginRemove indicates an expected call of PluginRemove.\nfunc (mr *MockAPIClientMockRecorder) PluginRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginRemove\", reflect.TypeOf((*MockAPIClient)(nil).PluginRemove), arg0, arg1, arg2)\n}\n\n// PluginSet mocks base method.\nfunc (m *MockAPIClient) PluginSet(arg0 context.Context, arg1 string, arg2 client.PluginSetOptions) (client.PluginSetResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginSet\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginSetResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginSet indicates an 
expected call of PluginSet.\nfunc (mr *MockAPIClientMockRecorder) PluginSet(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginSet\", reflect.TypeOf((*MockAPIClient)(nil).PluginSet), arg0, arg1, arg2)\n}\n\n// PluginUpgrade mocks base method.\nfunc (m *MockAPIClient) PluginUpgrade(arg0 context.Context, arg1 string, arg2 client.PluginUpgradeOptions) (client.PluginUpgradeResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginUpgrade\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.PluginUpgradeResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginUpgrade indicates an expected call of PluginUpgrade.\nfunc (mr *MockAPIClientMockRecorder) PluginUpgrade(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginUpgrade\", reflect.TypeOf((*MockAPIClient)(nil).PluginUpgrade), arg0, arg1, arg2)\n}\n\n// RegistryLogin mocks base method.\nfunc (m *MockAPIClient) RegistryLogin(arg0 context.Context, arg1 client.RegistryLoginOptions) (client.RegistryLoginResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RegistryLogin\", arg0, arg1)\n\tret0, _ := ret[0].(client.RegistryLoginResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RegistryLogin indicates an expected call of RegistryLogin.\nfunc (mr *MockAPIClientMockRecorder) RegistryLogin(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RegistryLogin\", reflect.TypeOf((*MockAPIClient)(nil).RegistryLogin), arg0, arg1)\n}\n\n// SecretCreate mocks base method.\nfunc (m *MockAPIClient) SecretCreate(arg0 context.Context, arg1 client.SecretCreateOptions) (client.SecretCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretCreate\", arg0, arg1)\n\tret0, _ := ret[0].(client.SecretCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, 
ret1\n}\n\n// SecretCreate indicates an expected call of SecretCreate.\nfunc (mr *MockAPIClientMockRecorder) SecretCreate(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretCreate\", reflect.TypeOf((*MockAPIClient)(nil).SecretCreate), arg0, arg1)\n}\n\n// SecretInspect mocks base method.\nfunc (m *MockAPIClient) SecretInspect(arg0 context.Context, arg1 string, arg2 client.SecretInspectOptions) (client.SecretInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.SecretInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretInspect indicates an expected call of SecretInspect.\nfunc (mr *MockAPIClientMockRecorder) SecretInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretInspect\", reflect.TypeOf((*MockAPIClient)(nil).SecretInspect), arg0, arg1, arg2)\n}\n\n// SecretList mocks base method.\nfunc (m *MockAPIClient) SecretList(arg0 context.Context, arg1 client.SecretListOptions) (client.SecretListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretList\", arg0, arg1)\n\tret0, _ := ret[0].(client.SecretListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretList indicates an expected call of SecretList.\nfunc (mr *MockAPIClientMockRecorder) SecretList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretList\", reflect.TypeOf((*MockAPIClient)(nil).SecretList), arg0, arg1)\n}\n\n// SecretRemove mocks base method.\nfunc (m *MockAPIClient) SecretRemove(arg0 context.Context, arg1 string, arg2 client.SecretRemoveOptions) (client.SecretRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.SecretRemoveResult)\n\tret1, _ := 
ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretRemove indicates an expected call of SecretRemove.\nfunc (mr *MockAPIClientMockRecorder) SecretRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretRemove\", reflect.TypeOf((*MockAPIClient)(nil).SecretRemove), arg0, arg1, arg2)\n}\n\n// SecretUpdate mocks base method.\nfunc (m *MockAPIClient) SecretUpdate(arg0 context.Context, arg1 string, arg2 client.SecretUpdateOptions) (client.SecretUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretUpdate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.SecretUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretUpdate indicates an expected call of SecretUpdate.\nfunc (mr *MockAPIClientMockRecorder) SecretUpdate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretUpdate\", reflect.TypeOf((*MockAPIClient)(nil).SecretUpdate), arg0, arg1, arg2)\n}\n\n// ServerVersion mocks base method.\nfunc (m *MockAPIClient) ServerVersion(arg0 context.Context, arg1 client.ServerVersionOptions) (client.ServerVersionResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServerVersion\", arg0, arg1)\n\tret0, _ := ret[0].(client.ServerVersionResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServerVersion indicates an expected call of ServerVersion.\nfunc (mr *MockAPIClientMockRecorder) ServerVersion(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServerVersion\", reflect.TypeOf((*MockAPIClient)(nil).ServerVersion), arg0, arg1)\n}\n\n// ServiceCreate mocks base method.\nfunc (m *MockAPIClient) ServiceCreate(arg0 context.Context, arg1 client.ServiceCreateOptions) (client.ServiceCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceCreate\", arg0, arg1)\n\tret0, _ := 
ret[0].(client.ServiceCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceCreate indicates an expected call of ServiceCreate.\nfunc (mr *MockAPIClientMockRecorder) ServiceCreate(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceCreate\", reflect.TypeOf((*MockAPIClient)(nil).ServiceCreate), arg0, arg1)\n}\n\n// ServiceInspect mocks base method.\nfunc (m *MockAPIClient) ServiceInspect(arg0 context.Context, arg1 string, arg2 client.ServiceInspectOptions) (client.ServiceInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ServiceInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceInspect indicates an expected call of ServiceInspect.\nfunc (mr *MockAPIClientMockRecorder) ServiceInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceInspect\", reflect.TypeOf((*MockAPIClient)(nil).ServiceInspect), arg0, arg1, arg2)\n}\n\n// ServiceList mocks base method.\nfunc (m *MockAPIClient) ServiceList(arg0 context.Context, arg1 client.ServiceListOptions) (client.ServiceListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceList\", arg0, arg1)\n\tret0, _ := ret[0].(client.ServiceListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceList indicates an expected call of ServiceList.\nfunc (mr *MockAPIClientMockRecorder) ServiceList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceList\", reflect.TypeOf((*MockAPIClient)(nil).ServiceList), arg0, arg1)\n}\n\n// ServiceLogs mocks base method.\nfunc (m *MockAPIClient) ServiceLogs(arg0 context.Context, arg1 string, arg2 client.ServiceLogsOptions) (client.ServiceLogsResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"ServiceLogs\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ServiceLogsResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceLogs indicates an expected call of ServiceLogs.\nfunc (mr *MockAPIClientMockRecorder) ServiceLogs(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceLogs\", reflect.TypeOf((*MockAPIClient)(nil).ServiceLogs), arg0, arg1, arg2)\n}\n\n// ServiceRemove mocks base method.\nfunc (m *MockAPIClient) ServiceRemove(arg0 context.Context, arg1 string, arg2 client.ServiceRemoveOptions) (client.ServiceRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ServiceRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceRemove indicates an expected call of ServiceRemove.\nfunc (mr *MockAPIClientMockRecorder) ServiceRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceRemove\", reflect.TypeOf((*MockAPIClient)(nil).ServiceRemove), arg0, arg1, arg2)\n}\n\n// ServiceUpdate mocks base method.\nfunc (m *MockAPIClient) ServiceUpdate(arg0 context.Context, arg1 string, arg2 client.ServiceUpdateOptions) (client.ServiceUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceUpdate\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.ServiceUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceUpdate indicates an expected call of ServiceUpdate.\nfunc (mr *MockAPIClientMockRecorder) ServiceUpdate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceUpdate\", reflect.TypeOf((*MockAPIClient)(nil).ServiceUpdate), arg0, arg1, arg2)\n}\n\n// SwarmGetUnlockKey mocks base method.\nfunc (m *MockAPIClient) SwarmGetUnlockKey(arg0 context.Context) 
(client.SwarmGetUnlockKeyResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmGetUnlockKey\", arg0)\n\tret0, _ := ret[0].(client.SwarmGetUnlockKeyResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmGetUnlockKey indicates an expected call of SwarmGetUnlockKey.\nfunc (mr *MockAPIClientMockRecorder) SwarmGetUnlockKey(arg0 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmGetUnlockKey\", reflect.TypeOf((*MockAPIClient)(nil).SwarmGetUnlockKey), arg0)\n}\n\n// SwarmInit mocks base method.\nfunc (m *MockAPIClient) SwarmInit(arg0 context.Context, arg1 client.SwarmInitOptions) (client.SwarmInitResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmInit\", arg0, arg1)\n\tret0, _ := ret[0].(client.SwarmInitResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmInit indicates an expected call of SwarmInit.\nfunc (mr *MockAPIClientMockRecorder) SwarmInit(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmInit\", reflect.TypeOf((*MockAPIClient)(nil).SwarmInit), arg0, arg1)\n}\n\n// SwarmInspect mocks base method.\nfunc (m *MockAPIClient) SwarmInspect(arg0 context.Context, arg1 client.SwarmInspectOptions) (client.SwarmInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmInspect\", arg0, arg1)\n\tret0, _ := ret[0].(client.SwarmInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmInspect indicates an expected call of SwarmInspect.\nfunc (mr *MockAPIClientMockRecorder) SwarmInspect(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmInspect\", reflect.TypeOf((*MockAPIClient)(nil).SwarmInspect), arg0, arg1)\n}\n\n// SwarmJoin mocks base method.\nfunc (m *MockAPIClient) SwarmJoin(arg0 context.Context, arg1 client.SwarmJoinOptions) (client.SwarmJoinResult, error) 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmJoin\", arg0, arg1)\n\tret0, _ := ret[0].(client.SwarmJoinResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmJoin indicates an expected call of SwarmJoin.\nfunc (mr *MockAPIClientMockRecorder) SwarmJoin(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmJoin\", reflect.TypeOf((*MockAPIClient)(nil).SwarmJoin), arg0, arg1)\n}\n\n// SwarmLeave mocks base method.\nfunc (m *MockAPIClient) SwarmLeave(arg0 context.Context, arg1 client.SwarmLeaveOptions) (client.SwarmLeaveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmLeave\", arg0, arg1)\n\tret0, _ := ret[0].(client.SwarmLeaveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmLeave indicates an expected call of SwarmLeave.\nfunc (mr *MockAPIClientMockRecorder) SwarmLeave(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmLeave\", reflect.TypeOf((*MockAPIClient)(nil).SwarmLeave), arg0, arg1)\n}\n\n// SwarmUnlock mocks base method.\nfunc (m *MockAPIClient) SwarmUnlock(arg0 context.Context, arg1 client.SwarmUnlockOptions) (client.SwarmUnlockResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmUnlock\", arg0, arg1)\n\tret0, _ := ret[0].(client.SwarmUnlockResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmUnlock indicates an expected call of SwarmUnlock.\nfunc (mr *MockAPIClientMockRecorder) SwarmUnlock(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmUnlock\", reflect.TypeOf((*MockAPIClient)(nil).SwarmUnlock), arg0, arg1)\n}\n\n// SwarmUpdate mocks base method.\nfunc (m *MockAPIClient) SwarmUpdate(arg0 context.Context, arg1 client.SwarmUpdateOptions) (client.SwarmUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmUpdate\", arg0, 
arg1)\n\tret0, _ := ret[0].(client.SwarmUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmUpdate indicates an expected call of SwarmUpdate.\nfunc (mr *MockAPIClientMockRecorder) SwarmUpdate(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmUpdate\", reflect.TypeOf((*MockAPIClient)(nil).SwarmUpdate), arg0, arg1)\n}\n\n// TaskInspect mocks base method.\nfunc (m *MockAPIClient) TaskInspect(arg0 context.Context, arg1 string, arg2 client.TaskInspectOptions) (client.TaskInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.TaskInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskInspect indicates an expected call of TaskInspect.\nfunc (mr *MockAPIClientMockRecorder) TaskInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskInspect\", reflect.TypeOf((*MockAPIClient)(nil).TaskInspect), arg0, arg1, arg2)\n}\n\n// TaskList mocks base method.\nfunc (m *MockAPIClient) TaskList(arg0 context.Context, arg1 client.TaskListOptions) (client.TaskListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskList\", arg0, arg1)\n\tret0, _ := ret[0].(client.TaskListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskList indicates an expected call of TaskList.\nfunc (mr *MockAPIClientMockRecorder) TaskList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskList\", reflect.TypeOf((*MockAPIClient)(nil).TaskList), arg0, arg1)\n}\n\n// TaskLogs mocks base method.\nfunc (m *MockAPIClient) TaskLogs(arg0 context.Context, arg1 string, arg2 client.TaskLogsOptions) (client.TaskLogsResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskLogs\", arg0, arg1, arg2)\n\tret0, _ := 
ret[0].(client.TaskLogsResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskLogs indicates an expected call of TaskLogs.\nfunc (mr *MockAPIClientMockRecorder) TaskLogs(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskLogs\", reflect.TypeOf((*MockAPIClient)(nil).TaskLogs), arg0, arg1, arg2)\n}\n\n// VolumeCreate mocks base method.\nfunc (m *MockAPIClient) VolumeCreate(arg0 context.Context, arg1 client.VolumeCreateOptions) (client.VolumeCreateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeCreate\", arg0, arg1)\n\tret0, _ := ret[0].(client.VolumeCreateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeCreate indicates an expected call of VolumeCreate.\nfunc (mr *MockAPIClientMockRecorder) VolumeCreate(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeCreate\", reflect.TypeOf((*MockAPIClient)(nil).VolumeCreate), arg0, arg1)\n}\n\n// VolumeInspect mocks base method.\nfunc (m *MockAPIClient) VolumeInspect(arg0 context.Context, arg1 string, arg2 client.VolumeInspectOptions) (client.VolumeInspectResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeInspect\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.VolumeInspectResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeInspect indicates an expected call of VolumeInspect.\nfunc (mr *MockAPIClientMockRecorder) VolumeInspect(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeInspect\", reflect.TypeOf((*MockAPIClient)(nil).VolumeInspect), arg0, arg1, arg2)\n}\n\n// VolumeList mocks base method.\nfunc (m *MockAPIClient) VolumeList(arg0 context.Context, arg1 client.VolumeListOptions) (client.VolumeListResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeList\", arg0, arg1)\n\tret0, _ := 
ret[0].(client.VolumeListResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeList indicates an expected call of VolumeList.\nfunc (mr *MockAPIClientMockRecorder) VolumeList(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeList\", reflect.TypeOf((*MockAPIClient)(nil).VolumeList), arg0, arg1)\n}\n\n// VolumePrune mocks base method.\nfunc (m *MockAPIClient) VolumePrune(arg0 context.Context, arg1 client.VolumePruneOptions) (client.VolumePruneResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumePrune\", arg0, arg1)\n\tret0, _ := ret[0].(client.VolumePruneResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumePrune indicates an expected call of VolumePrune.\nfunc (mr *MockAPIClientMockRecorder) VolumePrune(arg0, arg1 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumePrune\", reflect.TypeOf((*MockAPIClient)(nil).VolumePrune), arg0, arg1)\n}\n\n// VolumeRemove mocks base method.\nfunc (m *MockAPIClient) VolumeRemove(arg0 context.Context, arg1 string, arg2 client.VolumeRemoveOptions) (client.VolumeRemoveResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeRemove\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(client.VolumeRemoveResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeRemove indicates an expected call of VolumeRemove.\nfunc (mr *MockAPIClientMockRecorder) VolumeRemove(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeRemove\", reflect.TypeOf((*MockAPIClient)(nil).VolumeRemove), arg0, arg1, arg2)\n}\n\n// VolumeUpdate mocks base method.\nfunc (m *MockAPIClient) VolumeUpdate(arg0 context.Context, arg1 string, arg2 client.VolumeUpdateOptions) (client.VolumeUpdateResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeUpdate\", arg0, arg1, 
arg2)\n\tret0, _ := ret[0].(client.VolumeUpdateResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeUpdate indicates an expected call of VolumeUpdate.\nfunc (mr *MockAPIClientMockRecorder) VolumeUpdate(arg0, arg1, arg2 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeUpdate\", reflect.TypeOf((*MockAPIClient)(nil).VolumeUpdate), arg0, arg1, arg2)\n}\n"
  },
  {
    "path": "pkg/mocks/mock_docker_cli.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: github.com/docker/cli/cli/command (interfaces: Cli)\n//\n// Generated by this command:\n//\n//\tmockgen -destination pkg/mocks/mock_docker_cli.go -package mocks github.com/docker/cli/cli/command Cli\n//\n\n// Package mocks is a generated GoMock package.\npackage mocks\n\nimport (\n\treflect \"reflect\"\n\n\tcommand \"github.com/docker/cli/cli/command\"\n\tconfigfile \"github.com/docker/cli/cli/config/configfile\"\n\tdocker \"github.com/docker/cli/cli/context/docker\"\n\tstore \"github.com/docker/cli/cli/context/store\"\n\tstreams \"github.com/docker/cli/cli/streams\"\n\tclient \"github.com/moby/moby/client\"\n\tmetric \"go.opentelemetry.io/otel/metric\"\n\tresource \"go.opentelemetry.io/otel/sdk/resource\"\n\ttrace \"go.opentelemetry.io/otel/trace\"\n\tgomock \"go.uber.org/mock/gomock\"\n)\n\n// MockCli is a mock of Cli interface.\ntype MockCli struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCliMockRecorder\n}\n\n// MockCliMockRecorder is the mock recorder for MockCli.\ntype MockCliMockRecorder struct {\n\tmock *MockCli\n}\n\n// NewMockCli creates a new mock instance.\nfunc NewMockCli(ctrl *gomock.Controller) *MockCli {\n\tmock := &MockCli{ctrl: ctrl}\n\tmock.recorder = &MockCliMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use.\nfunc (m *MockCli) EXPECT() *MockCliMockRecorder {\n\treturn m.recorder\n}\n\n// BuildKitEnabled mocks base method.\nfunc (m *MockCli) BuildKitEnabled() (bool, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildKitEnabled\")\n\tret0, _ := ret[0].(bool)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// BuildKitEnabled indicates an expected call of BuildKitEnabled.\nfunc (mr *MockCliMockRecorder) BuildKitEnabled() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildKitEnabled\", 
reflect.TypeOf((*MockCli)(nil).BuildKitEnabled))\n}\n\n// Client mocks base method.\nfunc (m *MockCli) Client() client.APIClient {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Client\")\n\tret0, _ := ret[0].(client.APIClient)\n\treturn ret0\n}\n\n// Client indicates an expected call of Client.\nfunc (mr *MockCliMockRecorder) Client() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Client\", reflect.TypeOf((*MockCli)(nil).Client))\n}\n\n// ConfigFile mocks base method.\nfunc (m *MockCli) ConfigFile() *configfile.ConfigFile {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigFile\")\n\tret0, _ := ret[0].(*configfile.ConfigFile)\n\treturn ret0\n}\n\n// ConfigFile indicates an expected call of ConfigFile.\nfunc (mr *MockCliMockRecorder) ConfigFile() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigFile\", reflect.TypeOf((*MockCli)(nil).ConfigFile))\n}\n\n// ContextStore mocks base method.\nfunc (m *MockCli) ContextStore() store.Store {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContextStore\")\n\tret0, _ := ret[0].(store.Store)\n\treturn ret0\n}\n\n// ContextStore indicates an expected call of ContextStore.\nfunc (mr *MockCliMockRecorder) ContextStore() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContextStore\", reflect.TypeOf((*MockCli)(nil).ContextStore))\n}\n\n// CurrentContext mocks base method.\nfunc (m *MockCli) CurrentContext() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CurrentContext\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// CurrentContext indicates an expected call of CurrentContext.\nfunc (mr *MockCliMockRecorder) CurrentContext() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CurrentContext\", reflect.TypeOf((*MockCli)(nil).CurrentContext))\n}\n\n// CurrentVersion mocks base method.\nfunc (m 
*MockCli) CurrentVersion() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CurrentVersion\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// CurrentVersion indicates an expected call of CurrentVersion.\nfunc (mr *MockCliMockRecorder) CurrentVersion() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CurrentVersion\", reflect.TypeOf((*MockCli)(nil).CurrentVersion))\n}\n\n// DockerEndpoint mocks base method.\nfunc (m *MockCli) DockerEndpoint() docker.Endpoint {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DockerEndpoint\")\n\tret0, _ := ret[0].(docker.Endpoint)\n\treturn ret0\n}\n\n// DockerEndpoint indicates an expected call of DockerEndpoint.\nfunc (mr *MockCliMockRecorder) DockerEndpoint() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DockerEndpoint\", reflect.TypeOf((*MockCli)(nil).DockerEndpoint))\n}\n\n// Err mocks base method.\nfunc (m *MockCli) Err() *streams.Out {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Err\")\n\tret0, _ := ret[0].(*streams.Out)\n\treturn ret0\n}\n\n// Err indicates an expected call of Err.\nfunc (mr *MockCliMockRecorder) Err() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Err\", reflect.TypeOf((*MockCli)(nil).Err))\n}\n\n// In mocks base method.\nfunc (m *MockCli) In() *streams.In {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"In\")\n\tret0, _ := ret[0].(*streams.In)\n\treturn ret0\n}\n\n// In indicates an expected call of In.\nfunc (mr *MockCliMockRecorder) In() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"In\", reflect.TypeOf((*MockCli)(nil).In))\n}\n\n// MeterProvider mocks base method.\nfunc (m *MockCli) MeterProvider() metric.MeterProvider {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"MeterProvider\")\n\tret0, _ := ret[0].(metric.MeterProvider)\n\treturn ret0\n}\n\n// MeterProvider 
indicates an expected call of MeterProvider.\nfunc (mr *MockCliMockRecorder) MeterProvider() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"MeterProvider\", reflect.TypeOf((*MockCli)(nil).MeterProvider))\n}\n\n// Out mocks base method.\nfunc (m *MockCli) Out() *streams.Out {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Out\")\n\tret0, _ := ret[0].(*streams.Out)\n\treturn ret0\n}\n\n// Out indicates an expected call of Out.\nfunc (mr *MockCliMockRecorder) Out() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Out\", reflect.TypeOf((*MockCli)(nil).Out))\n}\n\n// Resource mocks base method.\nfunc (m *MockCli) Resource() *resource.Resource {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Resource\")\n\tret0, _ := ret[0].(*resource.Resource)\n\treturn ret0\n}\n\n// Resource indicates an expected call of Resource.\nfunc (mr *MockCliMockRecorder) Resource() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Resource\", reflect.TypeOf((*MockCli)(nil).Resource))\n}\n\n// ServerInfo mocks base method.\nfunc (m *MockCli) ServerInfo() command.ServerInfo {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServerInfo\")\n\tret0, _ := ret[0].(command.ServerInfo)\n\treturn ret0\n}\n\n// ServerInfo indicates an expected call of ServerInfo.\nfunc (mr *MockCliMockRecorder) ServerInfo() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServerInfo\", reflect.TypeOf((*MockCli)(nil).ServerInfo))\n}\n\n// SetIn mocks base method.\nfunc (m *MockCli) SetIn(arg0 *streams.In) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"SetIn\", arg0)\n}\n\n// SetIn indicates an expected call of SetIn.\nfunc (mr *MockCliMockRecorder) SetIn(arg0 any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetIn\", reflect.TypeOf((*MockCli)(nil).SetIn), 
arg0)\n}\n\n// TracerProvider mocks base method.\nfunc (m *MockCli) TracerProvider() trace.TracerProvider {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TracerProvider\")\n\tret0, _ := ret[0].(trace.TracerProvider)\n\treturn ret0\n}\n\n// TracerProvider indicates an expected call of TracerProvider.\nfunc (mr *MockCliMockRecorder) TracerProvider() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TracerProvider\", reflect.TypeOf((*MockCli)(nil).TracerProvider))\n}\n"
  },
  {
    "path": "pkg/mocks/mock_docker_compose_api.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: ./pkg/api/api.go\n//\n// Generated by this command:\n//\n//\tmockgen -destination pkg/mocks/mock_docker_compose_api.go -package mocks -source=./pkg/api/api.go Service\n//\n\n// Package mocks is a generated GoMock package.\npackage mocks\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\ttypes \"github.com/compose-spec/compose-go/v2/types\"\n\tapi \"github.com/docker/compose/v5/pkg/api\"\n\tgomock \"go.uber.org/mock/gomock\"\n)\n\n// MockCompose is a mock of Compose interface.\ntype MockCompose struct {\n\tctrl     *gomock.Controller\n\trecorder *MockComposeMockRecorder\n}\n\n// MockComposeMockRecorder is the mock recorder for MockCompose.\ntype MockComposeMockRecorder struct {\n\tmock *MockCompose\n}\n\n// NewMockCompose creates a new mock instance.\nfunc NewMockCompose(ctrl *gomock.Controller) *MockCompose {\n\tmock := &MockCompose{ctrl: ctrl}\n\tmock.recorder = &MockComposeMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use.\nfunc (m *MockCompose) EXPECT() *MockComposeMockRecorder {\n\treturn m.recorder\n}\n\n// Attach mocks base method.\nfunc (m *MockCompose) Attach(ctx context.Context, projectName string, options api.AttachOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Attach\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Attach indicates an expected call of Attach.\nfunc (mr *MockComposeMockRecorder) Attach(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Attach\", reflect.TypeOf((*MockCompose)(nil).Attach), ctx, projectName, options)\n}\n\n// Build mocks base method.\nfunc (m *MockCompose) Build(ctx context.Context, project *types.Project, options api.BuildOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Build\", ctx, project, options)\n\tret0, _ := 
ret[0].(error)\n\treturn ret0\n}\n\n// Build indicates an expected call of Build.\nfunc (mr *MockComposeMockRecorder) Build(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Build\", reflect.TypeOf((*MockCompose)(nil).Build), ctx, project, options)\n}\n\n// Commit mocks base method.\nfunc (m *MockCompose) Commit(ctx context.Context, projectName string, options api.CommitOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Commit\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Commit indicates an expected call of Commit.\nfunc (mr *MockComposeMockRecorder) Commit(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Commit\", reflect.TypeOf((*MockCompose)(nil).Commit), ctx, projectName, options)\n}\n\n// Copy mocks base method.\nfunc (m *MockCompose) Copy(ctx context.Context, projectName string, options api.CopyOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Copy\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Copy indicates an expected call of Copy.\nfunc (mr *MockComposeMockRecorder) Copy(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Copy\", reflect.TypeOf((*MockCompose)(nil).Copy), ctx, projectName, options)\n}\n\n// Create mocks base method.\nfunc (m *MockCompose) Create(ctx context.Context, project *types.Project, options api.CreateOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Create\", ctx, project, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Create indicates an expected call of Create.\nfunc (mr *MockComposeMockRecorder) Create(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Create\", 
reflect.TypeOf((*MockCompose)(nil).Create), ctx, project, options)\n}\n\n// Down mocks base method.\nfunc (m *MockCompose) Down(ctx context.Context, projectName string, options api.DownOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Down\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Down indicates an expected call of Down.\nfunc (mr *MockComposeMockRecorder) Down(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Down\", reflect.TypeOf((*MockCompose)(nil).Down), ctx, projectName, options)\n}\n\n// Events mocks base method.\nfunc (m *MockCompose) Events(ctx context.Context, projectName string, options api.EventsOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Events\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Events indicates an expected call of Events.\nfunc (mr *MockComposeMockRecorder) Events(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Events\", reflect.TypeOf((*MockCompose)(nil).Events), ctx, projectName, options)\n}\n\n// Exec mocks base method.\nfunc (m *MockCompose) Exec(ctx context.Context, projectName string, options api.RunOptions) (int, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Exec\", ctx, projectName, options)\n\tret0, _ := ret[0].(int)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Exec indicates an expected call of Exec.\nfunc (mr *MockComposeMockRecorder) Exec(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Exec\", reflect.TypeOf((*MockCompose)(nil).Exec), ctx, projectName, options)\n}\n\n// Export mocks base method.\nfunc (m *MockCompose) Export(ctx context.Context, projectName string, options api.ExportOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"Export\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Export indicates an expected call of Export.\nfunc (mr *MockComposeMockRecorder) Export(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Export\", reflect.TypeOf((*MockCompose)(nil).Export), ctx, projectName, options)\n}\n\n// Generate mocks base method.\nfunc (m *MockCompose) Generate(ctx context.Context, options api.GenerateOptions) (*types.Project, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Generate\", ctx, options)\n\tret0, _ := ret[0].(*types.Project)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Generate indicates an expected call of Generate.\nfunc (mr *MockComposeMockRecorder) Generate(ctx, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Generate\", reflect.TypeOf((*MockCompose)(nil).Generate), ctx, options)\n}\n\n// Images mocks base method.\nfunc (m *MockCompose) Images(ctx context.Context, projectName string, options api.ImagesOptions) (map[string]api.ImageSummary, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Images\", ctx, projectName, options)\n\tret0, _ := ret[0].(map[string]api.ImageSummary)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Images indicates an expected call of Images.\nfunc (mr *MockComposeMockRecorder) Images(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Images\", reflect.TypeOf((*MockCompose)(nil).Images), ctx, projectName, options)\n}\n\n// Kill mocks base method.\nfunc (m *MockCompose) Kill(ctx context.Context, projectName string, options api.KillOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Kill\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Kill indicates an expected call of Kill.\nfunc (mr 
*MockComposeMockRecorder) Kill(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Kill\", reflect.TypeOf((*MockCompose)(nil).Kill), ctx, projectName, options)\n}\n\n// List mocks base method.\nfunc (m *MockCompose) List(ctx context.Context, options api.ListOptions) ([]api.Stack, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"List\", ctx, options)\n\tret0, _ := ret[0].([]api.Stack)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// List indicates an expected call of List.\nfunc (mr *MockComposeMockRecorder) List(ctx, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"List\", reflect.TypeOf((*MockCompose)(nil).List), ctx, options)\n}\n\n// LoadProject mocks base method.\nfunc (m *MockCompose) LoadProject(ctx context.Context, options api.ProjectLoadOptions) (*types.Project, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"LoadProject\", ctx, options)\n\tret0, _ := ret[0].(*types.Project)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// LoadProject indicates an expected call of LoadProject.\nfunc (mr *MockComposeMockRecorder) LoadProject(ctx, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"LoadProject\", reflect.TypeOf((*MockCompose)(nil).LoadProject), ctx, options)\n}\n\n// Logs mocks base method.\nfunc (m *MockCompose) Logs(ctx context.Context, projectName string, consumer api.LogConsumer, options api.LogOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Logs\", ctx, projectName, consumer, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Logs indicates an expected call of Logs.\nfunc (mr *MockComposeMockRecorder) Logs(ctx, projectName, consumer, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Logs\", 
reflect.TypeOf((*MockCompose)(nil).Logs), ctx, projectName, consumer, options)\n}\n\n// Pause mocks base method.\nfunc (m *MockCompose) Pause(ctx context.Context, projectName string, options api.PauseOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Pause\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Pause indicates an expected call of Pause.\nfunc (mr *MockComposeMockRecorder) Pause(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Pause\", reflect.TypeOf((*MockCompose)(nil).Pause), ctx, projectName, options)\n}\n\n// Port mocks base method.\nfunc (m *MockCompose) Port(ctx context.Context, projectName, service string, port uint16, options api.PortOptions) (string, int, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Port\", ctx, projectName, service, port, options)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(int)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// Port indicates an expected call of Port.\nfunc (mr *MockComposeMockRecorder) Port(ctx, projectName, service, port, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Port\", reflect.TypeOf((*MockCompose)(nil).Port), ctx, projectName, service, port, options)\n}\n\n// Ps mocks base method.\nfunc (m *MockCompose) Ps(ctx context.Context, projectName string, options api.PsOptions) ([]api.ContainerSummary, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ps\", ctx, projectName, options)\n\tret0, _ := ret[0].([]api.ContainerSummary)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Ps indicates an expected call of Ps.\nfunc (mr *MockComposeMockRecorder) Ps(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ps\", reflect.TypeOf((*MockCompose)(nil).Ps), ctx, projectName, options)\n}\n\n// 
Publish mocks base method.\nfunc (m *MockCompose) Publish(ctx context.Context, project *types.Project, repository string, options api.PublishOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Publish\", ctx, project, repository, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Publish indicates an expected call of Publish.\nfunc (mr *MockComposeMockRecorder) Publish(ctx, project, repository, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Publish\", reflect.TypeOf((*MockCompose)(nil).Publish), ctx, project, repository, options)\n}\n\n// Pull mocks base method.\nfunc (m *MockCompose) Pull(ctx context.Context, project *types.Project, options api.PullOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Pull\", ctx, project, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Pull indicates an expected call of Pull.\nfunc (mr *MockComposeMockRecorder) Pull(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Pull\", reflect.TypeOf((*MockCompose)(nil).Pull), ctx, project, options)\n}\n\n// Push mocks base method.\nfunc (m *MockCompose) Push(ctx context.Context, project *types.Project, options api.PushOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Push\", ctx, project, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Push indicates an expected call of Push.\nfunc (mr *MockComposeMockRecorder) Push(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Push\", reflect.TypeOf((*MockCompose)(nil).Push), ctx, project, options)\n}\n\n// Remove mocks base method.\nfunc (m *MockCompose) Remove(ctx context.Context, projectName string, options api.RemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Remove\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn 
ret0\n}\n\n// Remove indicates an expected call of Remove.\nfunc (mr *MockComposeMockRecorder) Remove(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Remove\", reflect.TypeOf((*MockCompose)(nil).Remove), ctx, projectName, options)\n}\n\n// Restart mocks base method.\nfunc (m *MockCompose) Restart(ctx context.Context, projectName string, options api.RestartOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Restart\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Restart indicates an expected call of Restart.\nfunc (mr *MockComposeMockRecorder) Restart(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Restart\", reflect.TypeOf((*MockCompose)(nil).Restart), ctx, projectName, options)\n}\n\n// RunOneOffContainer mocks base method.\nfunc (m *MockCompose) RunOneOffContainer(ctx context.Context, project *types.Project, opts api.RunOptions) (int, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RunOneOffContainer\", ctx, project, opts)\n\tret0, _ := ret[0].(int)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RunOneOffContainer indicates an expected call of RunOneOffContainer.\nfunc (mr *MockComposeMockRecorder) RunOneOffContainer(ctx, project, opts any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RunOneOffContainer\", reflect.TypeOf((*MockCompose)(nil).RunOneOffContainer), ctx, project, opts)\n}\n\n// Scale mocks base method.\nfunc (m *MockCompose) Scale(ctx context.Context, project *types.Project, options api.ScaleOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Scale\", ctx, project, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Scale indicates an expected call of Scale.\nfunc (mr *MockComposeMockRecorder) Scale(ctx, project, options any) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Scale\", reflect.TypeOf((*MockCompose)(nil).Scale), ctx, project, options)\n}\n\n// Start mocks base method.\nfunc (m *MockCompose) Start(ctx context.Context, projectName string, options api.StartOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Start\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Start indicates an expected call of Start.\nfunc (mr *MockComposeMockRecorder) Start(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Start\", reflect.TypeOf((*MockCompose)(nil).Start), ctx, projectName, options)\n}\n\n// Stop mocks base method.\nfunc (m *MockCompose) Stop(ctx context.Context, projectName string, options api.StopOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Stop\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Stop indicates an expected call of Stop.\nfunc (mr *MockComposeMockRecorder) Stop(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Stop\", reflect.TypeOf((*MockCompose)(nil).Stop), ctx, projectName, options)\n}\n\n// Top mocks base method.\nfunc (m *MockCompose) Top(ctx context.Context, projectName string, services []string) ([]api.ContainerProcSummary, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Top\", ctx, projectName, services)\n\tret0, _ := ret[0].([]api.ContainerProcSummary)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Top indicates an expected call of Top.\nfunc (mr *MockComposeMockRecorder) Top(ctx, projectName, services any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Top\", reflect.TypeOf((*MockCompose)(nil).Top), ctx, projectName, services)\n}\n\n// UnPause mocks base method.\nfunc (m *MockCompose) 
UnPause(ctx context.Context, projectName string, options api.PauseOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UnPause\", ctx, projectName, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UnPause indicates an expected call of UnPause.\nfunc (mr *MockComposeMockRecorder) UnPause(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UnPause\", reflect.TypeOf((*MockCompose)(nil).UnPause), ctx, projectName, options)\n}\n\n// Up mocks base method.\nfunc (m *MockCompose) Up(ctx context.Context, project *types.Project, options api.UpOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Up\", ctx, project, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Up indicates an expected call of Up.\nfunc (mr *MockComposeMockRecorder) Up(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Up\", reflect.TypeOf((*MockCompose)(nil).Up), ctx, project, options)\n}\n\n// Viz mocks base method.\nfunc (m *MockCompose) Viz(ctx context.Context, project *types.Project, options api.VizOptions) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Viz\", ctx, project, options)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Viz indicates an expected call of Viz.\nfunc (mr *MockComposeMockRecorder) Viz(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Viz\", reflect.TypeOf((*MockCompose)(nil).Viz), ctx, project, options)\n}\n\n// Volumes mocks base method.\nfunc (m *MockCompose) Volumes(ctx context.Context, project string, options api.VolumesOptions) ([]api.VolumesSummary, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Volumes\", ctx, project, options)\n\tret0, _ := ret[0].([]api.VolumesSummary)\n\tret1, _ := ret[1].(error)\n\treturn ret0, 
ret1\n}\n\n// Volumes indicates an expected call of Volumes.\nfunc (mr *MockComposeMockRecorder) Volumes(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Volumes\", reflect.TypeOf((*MockCompose)(nil).Volumes), ctx, project, options)\n}\n\n// Wait mocks base method.\nfunc (m *MockCompose) Wait(ctx context.Context, projectName string, options api.WaitOptions) (int64, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Wait\", ctx, projectName, options)\n\tret0, _ := ret[0].(int64)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Wait indicates an expected call of Wait.\nfunc (mr *MockComposeMockRecorder) Wait(ctx, projectName, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Wait\", reflect.TypeOf((*MockCompose)(nil).Wait), ctx, projectName, options)\n}\n\n// Watch mocks base method.\nfunc (m *MockCompose) Watch(ctx context.Context, project *types.Project, options api.WatchOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Watch\", ctx, project, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Watch indicates an expected call of Watch.\nfunc (mr *MockComposeMockRecorder) Watch(ctx, project, options any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Watch\", reflect.TypeOf((*MockCompose)(nil).Watch), ctx, project, options)\n}\n\n// MockLogConsumer is a mock of LogConsumer interface.\ntype MockLogConsumer struct {\n\tctrl     *gomock.Controller\n\trecorder *MockLogConsumerMockRecorder\n}\n\n// MockLogConsumerMockRecorder is the mock recorder for MockLogConsumer.\ntype MockLogConsumerMockRecorder struct {\n\tmock *MockLogConsumer\n}\n\n// NewMockLogConsumer creates a new mock instance.\nfunc NewMockLogConsumer(ctrl *gomock.Controller) *MockLogConsumer {\n\tmock := &MockLogConsumer{ctrl: ctrl}\n\tmock.recorder = 
&MockLogConsumerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use.\nfunc (m *MockLogConsumer) EXPECT() *MockLogConsumerMockRecorder {\n\treturn m.recorder\n}\n\n// Err mocks base method.\nfunc (m *MockLogConsumer) Err(containerName, message string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"Err\", containerName, message)\n}\n\n// Err indicates an expected call of Err.\nfunc (mr *MockLogConsumerMockRecorder) Err(containerName, message any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Err\", reflect.TypeOf((*MockLogConsumer)(nil).Err), containerName, message)\n}\n\n// Log mocks base method.\nfunc (m *MockLogConsumer) Log(containerName, message string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"Log\", containerName, message)\n}\n\n// Log indicates an expected call of Log.\nfunc (mr *MockLogConsumerMockRecorder) Log(containerName, message any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Log\", reflect.TypeOf((*MockLogConsumer)(nil).Log), containerName, message)\n}\n\n// Status mocks base method.\nfunc (m *MockLogConsumer) Status(container, msg string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"Status\", container, msg)\n}\n\n// Status indicates an expected call of Status.\nfunc (mr *MockLogConsumerMockRecorder) Status(container, msg any) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Status\", reflect.TypeOf((*MockLogConsumer)(nil).Status), container, msg)\n}\n"
  },
  {
    "path": "pkg/remote/cache.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\nfunc cacheDir() (string, error) {\n\tcache, ok := os.LookupEnv(\"XDG_CACHE_HOME\")\n\tif ok {\n\t\treturn filepath.Join(cache, \"docker-compose\"), nil\n\t}\n\n\tpath, err := osDependentCacheDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\terr = os.MkdirAll(path, 0o700)\n\treturn path, err\n}\n"
  },
  {
    "path": "pkg/remote/cache_darwin.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// Based on https://github.com/adrg/xdg\n// Licensed under MIT License (MIT)\n// Copyright (c) 2014 Adrian-George Bostan <adrg@epistack.com>\n\nfunc osDependentCacheDir() (string, error) {\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(home, \"Library\", \"Caches\", \"docker-compose\"), nil\n}\n"
  },
  {
    "path": "pkg/remote/cache_unix.go",
    "content": "//go:build linux || openbsd || freebsd\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// Based on https://github.com/adrg/xdg\n// Licensed under MIT License (MIT)\n// Copyright (c) 2014 Adrian-George Bostan <adrg@epistack.com>\n\nfunc osDependentCacheDir() (string, error) {\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(home, \".cache\", \"docker-compose\"), nil\n}\n"
  },
  {
    "path": "pkg/remote/cache_windows.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"golang.org/x/sys/windows\"\n)\n\n// Based on https://github.com/adrg/xdg\n// Licensed under MIT License (MIT)\n// Copyright (c) 2014 Adrian-George Bostan <adrg@epistack.com>\n\nfunc osDependentCacheDir() (string, error) {\n\tflags := []uint32{windows.KF_FLAG_DEFAULT, windows.KF_FLAG_DEFAULT_PATH}\n\tfor _, flag := range flags {\n\t\tp, _ := windows.KnownFolderPath(windows.FOLDERID_LocalAppData, flag|windows.KF_FLAG_DONT_VERIFY)\n\t\tif p != \"\" {\n\t\t\treturn filepath.Join(p, \"cache\", \"docker-compose\"), nil\n\t\t}\n\t}\n\n\tappData, ok := os.LookupEnv(\"LOCALAPPDATA\")\n\tif ok {\n\t\treturn filepath.Join(appData, \"cache\", \"docker-compose\"), nil\n\t}\n\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(home, \"AppData\", \"Local\", \"cache\", \"docker-compose\"), nil\n}\n"
  },
  {
    "path": "pkg/remote/git.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/cli\"\n\t\"github.com/compose-spec/compose-go/v2/loader\"\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/docker/cli/cli/command\"\n\tgitutil \"github.com/moby/buildkit/frontend/dockerfile/dfgitutil\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nconst GIT_REMOTE_ENABLED = \"COMPOSE_EXPERIMENTAL_GIT_REMOTE\"\n\nfunc gitRemoteLoaderEnabled() (bool, error) {\n\tif v := os.Getenv(GIT_REMOTE_ENABLED); v != \"\" {\n\t\tenabled, err := strconv.ParseBool(v)\n\t\tif err != nil {\n\t\t\treturn false, fmt.Errorf(\"COMPOSE_EXPERIMENTAL_GIT_REMOTE environment variable expects boolean value: %w\", err)\n\t\t}\n\t\treturn enabled, err\n\t}\n\treturn true, nil\n}\n\nfunc NewGitRemoteLoader(dockerCli command.Cli, offline bool) loader.ResourceLoader {\n\treturn gitRemoteLoader{\n\t\tdockerCli: dockerCli,\n\t\toffline:   offline,\n\t\tknown:     map[string]string{},\n\t}\n}\n\ntype gitRemoteLoader struct {\n\tdockerCli command.Cli\n\toffline   bool\n\tknown     map[string]string\n}\n\nfunc (g gitRemoteLoader) Accept(path string) bool {\n\t_, _, err := gitutil.ParseGitRef(path)\n\treturn 
err == nil\n}\n\nvar commitSHA = regexp.MustCompile(`^[a-f0-9]{40}$`)\n\nfunc (g gitRemoteLoader) Load(ctx context.Context, path string) (string, error) {\n\tenabled, err := gitRemoteLoaderEnabled()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif !enabled {\n\t\treturn \"\", fmt.Errorf(\"git remote resource is disabled by %q\", GIT_REMOTE_ENABLED)\n\t}\n\n\tref, _, err := gitutil.ParseGitRef(path)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tlocal, ok := g.known[path]\n\tif !ok {\n\t\tif ref.Ref == \"\" {\n\t\t\tref.Ref = \"HEAD\" // default branch\n\t\t}\n\n\t\terr = g.resolveGitRef(ctx, path, ref)\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\tcache, err := cacheDir()\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"initializing remote resource cache: %w\", err)\n\t\t}\n\n\t\tlocal = filepath.Join(cache, ref.Ref)\n\t\tif _, err := os.Stat(local); os.IsNotExist(err) {\n\t\t\tif g.offline {\n\t\t\t\treturn \"\", nil\n\t\t\t}\n\t\t\terr = g.checkout(ctx, local, ref)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t}\n\t\tg.known[path] = local\n\t}\n\tif ref.SubDir != \"\" {\n\t\tif err := validateGitSubDir(local, ref.SubDir); err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tlocal = filepath.Join(local, ref.SubDir)\n\t}\n\tstat, err := os.Stat(local)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif stat.IsDir() {\n\t\tlocal, err = findFile(cli.DefaultFileNames, local)\n\t}\n\treturn local, err\n}\n\nfunc (g gitRemoteLoader) Dir(path string) string {\n\treturn g.known[path]\n}\n\n// validateGitSubDir ensures a subdirectory path is contained within the base directory\n// and doesn't escape via path traversal. 
Unlike validatePathInBase for OCI artifacts,\n// this allows nested directories but prevents traversal outside the base.\nfunc validateGitSubDir(base, subDir string) error {\n\tcleanSubDir := filepath.Clean(subDir)\n\n\tif filepath.IsAbs(cleanSubDir) {\n\t\treturn fmt.Errorf(\"git subdirectory must be relative, got: %s\", subDir)\n\t}\n\n\tif cleanSubDir == \"..\" || strings.HasPrefix(cleanSubDir, \"../\") || strings.HasPrefix(cleanSubDir, \"..\\\\\") {\n\t\treturn fmt.Errorf(\"git subdirectory path traversal detected: %s\", subDir)\n\t}\n\n\tif len(cleanSubDir) >= 2 && cleanSubDir[1] == ':' {\n\t\treturn fmt.Errorf(\"git subdirectory must be relative, got: %s\", subDir)\n\t}\n\n\ttargetPath := filepath.Join(base, cleanSubDir)\n\tcleanBase := filepath.Clean(base)\n\tcleanTarget := filepath.Clean(targetPath)\n\n\t// Ensure the target starts with the base path\n\trelPath, err := filepath.Rel(cleanBase, cleanTarget)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"invalid git subdirectory path: %w\", err)\n\t}\n\n\tif relPath == \"..\" || strings.HasPrefix(relPath, \"../\") || strings.HasPrefix(relPath, \"..\\\\\") {\n\t\treturn fmt.Errorf(\"git subdirectory escapes base directory: %s\", subDir)\n\t}\n\n\treturn nil\n}\n\nfunc (g gitRemoteLoader) resolveGitRef(ctx context.Context, path string, ref *gitutil.GitRef) error {\n\tif !commitSHA.MatchString(ref.Ref) {\n\t\tcmd := exec.CommandContext(ctx, \"git\", \"ls-remote\", \"--exit-code\", ref.Remote, ref.Ref)\n\t\tcmd.Env = g.gitCommandEnv()\n\t\tout, err := cmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\tif cmd.ProcessState.ExitCode() == 2 {\n\t\t\t\treturn fmt.Errorf(\"repository does not contain ref %s, output: %q: %w\", path, string(out), err)\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"failed to access repository at %s:\\n %s\", ref.Remote, out)\n\t\t}\n\t\tif len(out) < 40 {\n\t\t\treturn fmt.Errorf(\"unexpected git command output: %q\", string(out))\n\t\t}\n\t\tsha := string(out[:40])\n\t\tif !commitSHA.MatchString(sha) 
{\n\t\t\treturn fmt.Errorf(\"invalid commit sha %q\", sha)\n\t\t}\n\t\tref.Ref = sha\n\t}\n\treturn nil\n}\n\nfunc (g gitRemoteLoader) checkout(ctx context.Context, path string, ref *gitutil.GitRef) error {\n\terr := os.MkdirAll(path, 0o700)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = exec.CommandContext(ctx, \"git\", \"init\", path).Run()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcmd := exec.CommandContext(ctx, \"git\", \"remote\", \"add\", \"origin\", ref.Remote)\n\tcmd.Dir = path\n\terr = cmd.Run()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcmd = exec.CommandContext(ctx, \"git\", \"fetch\", \"--depth=1\", \"origin\", ref.Ref)\n\tcmd.Env = g.gitCommandEnv()\n\tcmd.Dir = path\n\n\terr = g.run(cmd)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcmd = exec.CommandContext(ctx, \"git\", \"checkout\", ref.Ref)\n\tcmd.Dir = path\n\terr = cmd.Run()\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (g gitRemoteLoader) run(cmd *exec.Cmd) error {\n\tif logrus.IsLevelEnabled(logrus.DebugLevel) {\n\t\toutput, err := cmd.CombinedOutput()\n\t\tscanner := bufio.NewScanner(bytes.NewBuffer(output))\n\t\tfor scanner.Scan() {\n\t\t\tline := scanner.Text()\n\t\t\tlogrus.Debug(line)\n\t\t}\n\t\treturn err\n\t}\n\treturn cmd.Run()\n}\n\nfunc (g gitRemoteLoader) gitCommandEnv() []string {\n\tenv := types.NewMapping(os.Environ())\n\tif env[\"GIT_TERMINAL_PROMPT\"] == \"\" {\n\t\t// Disable prompting for passwords by Git until user explicitly asks for it.\n\t\tenv[\"GIT_TERMINAL_PROMPT\"] = \"0\"\n\t}\n\tif env[\"GIT_SSH\"] == \"\" && env[\"GIT_SSH_COMMAND\"] == \"\" {\n\t\t// Disable any ssh connection pooling by Git and do not attempt to prompt the user.\n\t\tenv[\"GIT_SSH_COMMAND\"] = \"ssh -o ControlMaster=no -o BatchMode=yes\"\n\t}\n\tv := env.Values()\n\treturn v\n}\n\nfunc findFile(names []string, pwd string) (string, error) {\n\tfor _, n := range names {\n\t\tf := filepath.Join(pwd, n)\n\t\tif fi, err := os.Stat(f); err == nil && !fi.IsDir() {\n\t\t\treturn 
f, nil\n\t\t}\n\t}\n\treturn \"\", api.ErrNotFound\n}\n\nvar _ loader.ResourceLoader = gitRemoteLoader{}\n"
  },
  {
    "path": "pkg/remote/git_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestValidateGitSubDir(t *testing.T) {\n\tbase := \"/tmp/cache/compose/abc123def456\"\n\n\ttests := []struct {\n\t\tname    string\n\t\tsubDir  string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:    \"valid simple directory\",\n\t\t\tsubDir:  \"examples\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"valid nested directory\",\n\t\t\tsubDir:  \"examples/nginx\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"valid deeply nested directory\",\n\t\t\tsubDir:  \"examples/web/frontend/config\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"valid current directory\",\n\t\t\tsubDir:  \".\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"valid directory with redundant separators\",\n\t\t\tsubDir:  \"examples//nginx\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"valid directory with dots in name\",\n\t\t\tsubDir:  \"examples/nginx.conf.d\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - parent directory\",\n\t\t\tsubDir:  \"..\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - multiple parent directories\",\n\t\t\tsubDir:  \"../../../etc/passwd\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - deeply nested escape\",\n\t\t\tsubDir:  
\"../../../../../../../tmp/pwned\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - mixed with valid path\",\n\t\t\tsubDir:  \"examples/../../etc/passwd\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - at the end\",\n\t\t\tsubDir:  \"examples/..\",\n\t\t\twantErr: false, // This resolves to \".\" which is the current directory, safe\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - in the middle\",\n\t\t\tsubDir:  \"examples/../../../etc/passwd\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"path traversal - windows style\",\n\t\t\tsubDir:  \"..\\\\..\\\\..\\\\windows\\\\system32\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"absolute unix path\",\n\t\t\tsubDir:  \"/etc/passwd\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"absolute windows path\",\n\t\t\tsubDir:  \"C:\\\\windows\\\\system32\\\\config\\\\sam\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"absolute path with home directory\",\n\t\t\tsubDir:  \"/home/user/.ssh/id_rsa\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"normalized path that would escape\",\n\t\t\tsubDir:  \"./../../etc/passwd\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"directory name with three dots\",\n\t\t\tsubDir:  \".../config\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"directory name with four dots\",\n\t\t\tsubDir:  \"..../config\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:    \"directory name with five dots\",\n\t\t\tsubDir:  \"...../etc/passwd\",\n\t\t\twantErr: false, // \".....\" is a valid directory name, not path traversal\n\t\t},\n\t\t{\n\t\t\tname:    \"directory name starting with two dots and letter\",\n\t\t\tsubDir:  \"..foo/bar\",\n\t\t\twantErr: false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\terr := validateGitSubDir(base, tt.subDir)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"validateGitSubDir(%q, %q) error = %v, 
wantErr %v\",\n\t\t\t\t\tbase, tt.subDir, err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\n// TestValidateGitSubDirSecurityScenarios tests specific security scenarios\nfunc TestValidateGitSubDirSecurityScenarios(t *testing.T) {\n\tbase := \"/var/cache/docker-compose/git/1234567890abcdef\"\n\n\t// Test the exact vulnerability scenario from the issue\n\tt.Run(\"CVE scenario - /tmp traversal\", func(t *testing.T) {\n\t\tmaliciousPath := \"../../../../../../../tmp/pwned\"\n\t\terr := validateGitSubDir(base, maliciousPath)\n\t\tassert.ErrorContains(t, err, \"path traversal\")\n\t})\n\n\t// Test variations of the attack\n\tt.Run(\"CVE scenario - /etc traversal\", func(t *testing.T) {\n\t\tmaliciousPath := \"../../../../../../../../etc/passwd\"\n\t\terr := validateGitSubDir(base, maliciousPath)\n\t\tassert.ErrorContains(t, err, \"path traversal\")\n\t})\n\n\t// Test that legitimate nested paths still work\n\tt.Run(\"legitimate nested path\", func(t *testing.T) {\n\t\tvalidPath := \"examples/docker-compose/nginx/config\"\n\t\terr := validateGitSubDir(base, validPath)\n\t\tassert.NilError(t, err)\n\t})\n}\n"
  },
  {
    "path": "pkg/remote/oci.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/loader\"\n\t\"github.com/containerd/containerd/v2/core/images\"\n\t\"github.com/containerd/containerd/v2/core/remotes\"\n\t\"github.com/distribution/reference\"\n\t\"github.com/docker/cli/cli/command\"\n\tspec \"github.com/opencontainers/image-spec/specs-go/v1\"\n\n\t\"github.com/docker/compose/v5/internal/oci\"\n\t\"github.com/docker/compose/v5/pkg/api\"\n)\n\nconst (\n\tOCI_REMOTE_ENABLED = \"COMPOSE_EXPERIMENTAL_OCI_REMOTE\"\n\tOciPrefix          = \"oci://\"\n)\n\n// validatePathInBase ensures a file path is contained within the base directory,\n// as OCI artifacts resources must all live within the same folder.\nfunc validatePathInBase(base, unsafePath string) error {\n\t// Reject paths with path separators regardless of OS\n\tif strings.ContainsAny(unsafePath, \"\\\\/\") {\n\t\treturn fmt.Errorf(\"invalid OCI artifact\")\n\t}\n\n\t// Join the base with the untrusted path\n\ttargetPath := filepath.Join(base, unsafePath)\n\n\t// Get the directory of the target path\n\ttargetDir := filepath.Dir(targetPath)\n\n\t// Clean both paths to resolve any .. or . 
components\n\tcleanBase := filepath.Clean(base)\n\tcleanTargetDir := filepath.Clean(targetDir)\n\n\t// Check if the target directory is the same as base directory\n\tif cleanTargetDir != cleanBase {\n\t\treturn fmt.Errorf(\"invalid OCI artifact\")\n\t}\n\n\treturn nil\n}\n\nfunc ociRemoteLoaderEnabled() (bool, error) {\n\tif v := os.Getenv(OCI_REMOTE_ENABLED); v != \"\" {\n\t\tenabled, err := strconv.ParseBool(v)\n\t\tif err != nil {\n\t\t\treturn false, fmt.Errorf(\"COMPOSE_EXPERIMENTAL_OCI_REMOTE environment variable expects boolean value: %w\", err)\n\t\t}\n\t\treturn enabled, err\n\t}\n\treturn true, nil\n}\n\nfunc NewOCIRemoteLoader(dockerCli command.Cli, offline bool, options api.OCIOptions) loader.ResourceLoader {\n\treturn ociRemoteLoader{\n\t\tdockerCli:          dockerCli,\n\t\toffline:            offline,\n\t\tknown:              map[string]string{},\n\t\tinsecureRegistries: options.InsecureRegistries,\n\t}\n}\n\ntype ociRemoteLoader struct {\n\tdockerCli          command.Cli\n\toffline            bool\n\tknown              map[string]string\n\tinsecureRegistries []string\n}\n\nfunc (g ociRemoteLoader) Accept(path string) bool {\n\treturn strings.HasPrefix(path, OciPrefix)\n}\n\n//nolint:gocyclo\nfunc (g ociRemoteLoader) Load(ctx context.Context, path string) (string, error) {\n\tenabled, err := ociRemoteLoaderEnabled()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif !enabled {\n\t\treturn \"\", fmt.Errorf(\"OCI remote resource is disabled by %q\", OCI_REMOTE_ENABLED)\n\t}\n\n\tif g.offline {\n\t\treturn \"\", nil\n\t}\n\n\tlocal, ok := g.known[path]\n\tif !ok {\n\t\tref, err := reference.ParseDockerRef(path[len(OciPrefix):])\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\n\t\tresolver := oci.NewResolver(g.dockerCli.ConfigFile(), g.insecureRegistries...)\n\n\t\tdescriptor, content, err := oci.Get(ctx, resolver, ref)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"failed to pull OCI resource %q: %w\", ref, err)\n\t\t}\n\n\t\tcache, err := 
cacheDir()\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"initializing remote resource cache: %w\", err)\n\t\t}\n\n\t\tlocal = filepath.Join(cache, descriptor.Digest.Hex())\n\t\tif _, err = os.Stat(local); os.IsNotExist(err) {\n\n\t\t\t// a Compose application bundle is published as image index\n\t\t\tif images.IsIndexType(descriptor.MediaType) {\n\t\t\t\tvar index spec.Index\n\t\t\t\terr = json.Unmarshal(content, &index)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\tfound := false\n\t\t\t\tfor _, manifest := range index.Manifests {\n\t\t\t\t\tif manifest.ArtifactType != oci.ComposeProjectArtifactType {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tfound = true\n\t\t\t\t\tdigested, err := reference.WithDigest(ref, manifest.Digest)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn \"\", err\n\t\t\t\t\t}\n\t\t\t\t\tdescriptor, content, err = oci.Get(ctx, resolver, digested)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn \"\", fmt.Errorf(\"failed to pull OCI resource %q: %w\", ref, err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif !found {\n\t\t\t\t\treturn \"\", fmt.Errorf(\"OCI index %s doesn't refer to compose artifacts\", ref)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tvar manifest spec.Manifest\n\t\t\terr = json.Unmarshal(content, &manifest)\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", err\n\t\t\t}\n\n\t\t\terr = g.pullComposeFiles(ctx, local, manifest, ref, resolver)\n\t\t\tif err != nil {\n\t\t\t\t// we need to clean up the directory to be sure we won't let empty files present\n\t\t\t\t_ = os.RemoveAll(local)\n\t\t\t\treturn \"\", err\n\t\t\t}\n\t\t}\n\t\tg.known[path] = local\n\t}\n\treturn filepath.Join(local, \"compose.yaml\"), nil\n}\n\nfunc (g ociRemoteLoader) Dir(path string) string {\n\treturn g.known[path]\n}\n\nfunc (g ociRemoteLoader) pullComposeFiles(ctx context.Context, local string, manifest spec.Manifest, ref reference.Named, resolver remotes.Resolver) error {\n\terr := os.MkdirAll(local, 0o700)\n\tif err != nil {\n\t\treturn 
err\n\t}\n\tif (manifest.ArtifactType != \"\" && manifest.ArtifactType != oci.ComposeProjectArtifactType) ||\n\t\t(manifest.ArtifactType == \"\" && manifest.Config.MediaType != oci.ComposeEmptyConfigMediaType) {\n\t\treturn fmt.Errorf(\"%s is not a compose project OCI artifact, but %s\", ref.String(), manifest.ArtifactType)\n\t}\n\n\tfor i, layer := range manifest.Layers {\n\t\tdigested, err := reference.WithDigest(ref, layer.Digest)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t_, content, err := oci.Get(ctx, resolver, digested)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tswitch layer.MediaType {\n\t\tcase oci.ComposeYAMLMediaType:\n\t\t\tif err := writeComposeFile(layer, i, local, content); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase oci.ComposeEnvFileMediaType:\n\t\t\tif err := writeEnvFile(layer, local, content); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase oci.ComposeEmptyConfigMediaType:\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc writeComposeFile(layer spec.Descriptor, i int, local string, content []byte) error {\n\tfile := \"compose.yaml\"\n\tif _, ok := layer.Annotations[\"com.docker.compose.extends\"]; ok {\n\t\tfile = layer.Annotations[\"com.docker.compose.file\"]\n\t\tif err := validatePathInBase(local, file); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tf, err := os.OpenFile(filepath.Join(local, file), os.O_RDWR|os.O_CREATE|os.O_APPEND, 0o600)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() { _ = f.Close() }()\n\tif _, ok := layer.Annotations[\"com.docker.compose.file\"]; i > 0 && ok {\n\t\t_, err := f.Write([]byte(\"\\n---\\n\"))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\t_, err = f.Write(content)\n\treturn err\n}\n\nfunc writeEnvFile(layer spec.Descriptor, local string, content []byte) error {\n\tenvfilePath, ok := layer.Annotations[\"com.docker.compose.envfile\"]\n\tif !ok {\n\t\treturn fmt.Errorf(\"missing annotation com.docker.compose.envfile in layer %q\", layer.Digest)\n\t}\n\tif err := 
validatePathInBase(local, envfilePath); err != nil {\n\t\treturn err\n\t}\n\totherFile, err := os.Create(filepath.Join(local, envfilePath))\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer func() { _ = otherFile.Close() }()\n\t_, err = otherFile.Write(content)\n\treturn err\n}\n\nvar _ loader.ResourceLoader = ociRemoteLoader{}\n"
  },
  {
    "path": "pkg/remote/oci_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage remote\n\nimport (\n\t\"path/filepath\"\n\t\"testing\"\n\n\tspec \"github.com/opencontainers/image-spec/specs-go/v1\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestValidatePathInBase(t *testing.T) {\n\tbase := \"/tmp/cache/compose\"\n\n\ttests := []struct {\n\t\tname       string\n\t\tunsafePath string\n\t\twantErr    bool\n\t}{\n\t\t{\n\t\t\tname:       \"valid simple filename\",\n\t\t\tunsafePath: \"compose.yaml\",\n\t\t\twantErr:    false,\n\t\t},\n\t\t{\n\t\t\tname:       \"valid hashed filename\",\n\t\t\tunsafePath: \"f8f9ede3d201ec37d5a5e3a77bbadab79af26035e53135e19571f50d541d390c.yaml\",\n\t\t\twantErr:    false,\n\t\t},\n\t\t{\n\t\t\tname:       \"valid env file\",\n\t\t\tunsafePath: \".env\",\n\t\t\twantErr:    false,\n\t\t},\n\t\t{\n\t\t\tname:       \"valid env file with suffix\",\n\t\t\tunsafePath: \".env.prod\",\n\t\t\twantErr:    false,\n\t\t},\n\t\t{\n\t\t\tname:       \"unix path traversal\",\n\t\t\tunsafePath: \"../../../etc/passwd\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"windows path traversal\",\n\t\t\tunsafePath: \"..\\\\..\\\\..\\\\windows\\\\system32\\\\config\\\\sam\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"subdirectory unix\",\n\t\t\tunsafePath: \"config/base.yaml\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"subdirectory 
windows\",\n\t\t\tunsafePath: \"config\\\\base.yaml\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"absolute unix path\",\n\t\t\tunsafePath: \"/etc/passwd\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"absolute windows path\",\n\t\t\tunsafePath: \"C:\\\\windows\\\\system32\\\\config\\\\sam\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"parent reference only\",\n\t\t\tunsafePath: \"..\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"mixed separators\",\n\t\t\tunsafePath: \"config/sub\\\\file.yaml\",\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname:       \"filename with spaces\",\n\t\t\tunsafePath: \"my file.yaml\",\n\t\t\twantErr:    false,\n\t\t},\n\t\t{\n\t\t\tname:       \"filename with special chars\",\n\t\t\tunsafePath: \"file-name_v1.2.3.yaml\",\n\t\t\twantErr:    false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\terr := validatePathInBase(base, tt.unsafePath)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\ttargetPath := filepath.Join(base, tt.unsafePath)\n\t\t\t\ttargetDir := filepath.Dir(targetPath)\n\t\t\t\tt.Errorf(\"validatePathInBase(%q, %q) error = %v, wantErr %v\\ntargetDir=%q base=%q\",\n\t\t\t\t\tbase, tt.unsafePath, err, tt.wantErr, targetDir, base)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestWriteComposeFileWithExtendsPathTraversal(t *testing.T) {\n\ttmpDir := t.TempDir()\n\n\t// Create a layer with com.docker.compose.extends=true and a path traversal attempt\n\tlayer := spec.Descriptor{\n\t\tMediaType: \"application/vnd.docker.compose.file.v1+yaml\",\n\t\tDigest:    \"sha256:test123\",\n\t\tSize:      100,\n\t\tAnnotations: map[string]string{\n\t\t\t\"com.docker.compose.extends\": \"true\",\n\t\t\t\"com.docker.compose.file\":    \"../other\",\n\t\t},\n\t}\n\n\tcontent := []byte(\"services:\\n  test:\\n    image: nginx\\n\")\n\n\t// writeComposeFile should return an error due to path traversal\n\terr := writeComposeFile(layer, 0, tmpDir, 
content)\n\tassert.Error(t, err, \"invalid OCI artifact\")\n}\n"
  },
  {
    "path": "pkg/utils/durationutils.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\nimport \"time\"\n\nfunc DurationSecondToInt(d *time.Duration) *int {\n\tif d == nil {\n\t\treturn nil\n\t}\n\ttimeout := int(d.Seconds())\n\treturn &timeout\n}\n"
  },
  {
    "path": "pkg/utils/safebuffer.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\nimport (\n\t\"bytes\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\n// SafeBuffer is a thread safe version of bytes.Buffer\ntype SafeBuffer struct {\n\tm sync.RWMutex\n\tb bytes.Buffer\n}\n\n// Read is a thread safe version of bytes.Buffer::Read\nfunc (b *SafeBuffer) Read(p []byte) (n int, err error) {\n\tb.m.RLock()\n\tdefer b.m.RUnlock()\n\treturn b.b.Read(p)\n}\n\n// Write is a thread safe version of bytes.Buffer::Write\nfunc (b *SafeBuffer) Write(p []byte) (n int, err error) {\n\tb.m.Lock()\n\tdefer b.m.Unlock()\n\treturn b.b.Write(p)\n}\n\n// String is a thread safe version of bytes.Buffer::String\nfunc (b *SafeBuffer) String() string {\n\tb.m.RLock()\n\tdefer b.m.RUnlock()\n\treturn b.b.String()\n}\n\n// Bytes is a thread safe version of bytes.Buffer::Bytes\nfunc (b *SafeBuffer) Bytes() []byte {\n\tb.m.RLock()\n\tdefer b.m.RUnlock()\n\treturn b.b.Bytes()\n}\n\n// RequireEventuallyContains is a thread safe eventual checker for the buffer content\nfunc (b *SafeBuffer) RequireEventuallyContains(t testing.TB, v string) {\n\tt.Helper()\n\tvar bufContents strings.Builder\n\trequire.Eventuallyf(t, func() bool {\n\t\tb.m.Lock()\n\t\tdefer b.m.Unlock()\n\t\tif _, err := b.b.WriteTo(&bufContents); err != nil {\n\t\t\trequire.FailNowf(t, \"Failed to copy from 
buffer\",\n\t\t\t\t\"Error: %v\", err)\n\t\t}\n\t\treturn strings.Contains(bufContents.String(), v)\n\t}, 2*time.Second, 20*time.Millisecond,\n\t\t\"Buffer did not contain %q\\n============\\n%s\\n============\",\n\t\tv, &bufContents)\n}\n"
  },
  {
    "path": "pkg/utils/set.go",
    "content": "/*\n\n   Copyright 2020 Docker Compose CLI authors\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\ntype Set[T comparable] map[T]struct{}\n\nfunc NewSet[T comparable](v ...T) Set[T] {\n\tif len(v) == 0 {\n\t\treturn make(Set[T])\n\t}\n\n\tout := make(Set[T], len(v))\n\tfor i := range v {\n\t\tout.Add(v[i])\n\t}\n\treturn out\n}\n\nfunc (s Set[T]) Has(v T) bool {\n\t_, ok := s[v]\n\treturn ok\n}\n\nfunc (s Set[T]) Add(v T) {\n\ts[v] = struct{}{}\n}\n\nfunc (s Set[T]) AddAll(v ...T) {\n\tfor _, e := range v {\n\t\ts[e] = struct{}{}\n\t}\n}\n\nfunc (s Set[T]) Remove(v T) bool {\n\t_, ok := s[v]\n\tif ok {\n\t\tdelete(s, v)\n\t}\n\treturn ok\n}\n\nfunc (s Set[T]) Clear() {\n\tfor v := range s {\n\t\tdelete(s, v)\n\t}\n}\n\nfunc (s Set[T]) Elements() []T {\n\telements := make([]T, 0, len(s))\n\tfor v := range s {\n\t\telements = append(elements, v)\n\t}\n\treturn elements\n}\n\nfunc (s Set[T]) RemoveAll(elements ...T) {\n\tfor _, e := range elements {\n\t\ts.Remove(e)\n\t}\n}\n\nfunc (s Set[T]) Diff(other Set[T]) Set[T] {\n\tout := make(Set[T])\n\tfor k := range s {\n\t\tif _, ok := other[k]; !ok {\n\t\t\tout[k] = struct{}{}\n\t\t}\n\t}\n\treturn out\n}\n\nfunc (s Set[T]) Union(other Set[T]) Set[T] {\n\tout := make(Set[T])\n\tfor k := range s {\n\t\tout[k] = struct{}{}\n\t}\n\tfor k := range other {\n\t\tout[k] = struct{}{}\n\t}\n\treturn out\n}\n"
  },
  {
    "path": "pkg/utils/set_test.go",
    "content": "/*\n   Copyright 2022 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestSet_Has(t *testing.T) {\n\tx := NewSet[string](\"value\")\n\trequire.True(t, x.Has(\"value\"))\n\trequire.False(t, x.Has(\"VALUE\"))\n}\n\nfunc TestSet_Diff(t *testing.T) {\n\ta := NewSet[int](1, 2)\n\tb := NewSet[int](2, 3)\n\trequire.ElementsMatch(t, []int{1}, a.Diff(b).Elements())\n\trequire.ElementsMatch(t, []int{3}, b.Diff(a).Elements())\n}\n\nfunc TestSet_Union(t *testing.T) {\n\ta := NewSet[int](1, 2)\n\tb := NewSet[int](2, 3)\n\trequire.ElementsMatch(t, []int{1, 2, 3}, a.Union(b).Elements())\n\trequire.ElementsMatch(t, []int{1, 2, 3}, b.Union(a).Elements())\n}\n"
  },
  {
    "path": "pkg/utils/stringutils.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\nimport (\n\t\"strconv\"\n\t\"strings\"\n)\n\n// StringToBool converts a string to a boolean ignoring errors\nfunc StringToBool(s string) bool {\n\ts = strings.ToLower(strings.TrimSpace(s))\n\tif s == \"y\" {\n\t\treturn true\n\t}\n\tb, _ := strconv.ParseBool(s)\n\treturn b\n}\n"
  },
  {
    "path": "pkg/utils/writer.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\nimport (\n\t\"bytes\"\n\t\"io\"\n)\n\n// GetWriter creates an io.Writer that will actually split by line and format by LogConsumer\nfunc GetWriter(consumer func(string)) io.WriteCloser {\n\treturn &splitWriter{\n\t\tbuffer:   bytes.Buffer{},\n\t\tconsumer: consumer,\n\t}\n}\n\ntype splitWriter struct {\n\tbuffer   bytes.Buffer\n\tconsumer func(string)\n}\n\n// Write implements io.Writer. joins all input, splits on the separator and yields each chunk\nfunc (s *splitWriter) Write(b []byte) (int, error) {\n\tn, err := s.buffer.Write(b)\n\tif err != nil {\n\t\treturn n, err\n\t}\n\tfor {\n\t\tb = s.buffer.Bytes()\n\t\tindex := bytes.Index(b, []byte{'\\n'})\n\t\tif index < 0 {\n\t\t\tbreak\n\t\t}\n\t\tline := s.buffer.Next(index + 1)\n\t\ts.consumer(string(line[:len(line)-1]))\n\t}\n\treturn n, nil\n}\n\nfunc (s *splitWriter) Close() error {\n\tb := s.buffer.Bytes()\n\tif len(b) == 0 {\n\t\treturn nil\n\t}\n\ts.consumer(string(b))\n\treturn nil\n}\n"
  },
  {
    "path": "pkg/utils/writer_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage utils\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\n//nolint:errcheck\nfunc TestSplitWriter(t *testing.T) {\n\tvar lines []string\n\tw := GetWriter(func(line string) {\n\t\tlines = append(lines, line)\n\t})\n\tw.Write([]byte(\"h\"))\n\tw.Write([]byte(\"e\"))\n\tw.Write([]byte(\"l\"))\n\tw.Write([]byte(\"l\"))\n\tw.Write([]byte(\"o\"))\n\tw.Write([]byte(\"\\n\"))\n\tw.Write([]byte(\"world!\\n\"))\n\tassert.DeepEqual(t, lines, []string{\"hello\", \"world!\"})\n}\n"
  },
  {
    "path": "pkg/watch/debounce.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/jonboulle/clockwork\"\n\t\"github.com/sirupsen/logrus\"\n\n\t\"github.com/docker/compose/v5/pkg/utils\"\n)\n\nconst QuietPeriod = 500 * time.Millisecond\n\n// BatchDebounceEvents groups identical file events within a sliding time window and writes the results to the returned\n// channel.\n//\n// The returned channel is closed when the debouncer is stopped via context cancellation or by closing the input channel.\nfunc BatchDebounceEvents(ctx context.Context, clock clockwork.Clock, input <-chan FileEvent) <-chan []FileEvent {\n\tout := make(chan []FileEvent)\n\tgo func() {\n\t\tdefer close(out)\n\t\tseen := utils.Set[FileEvent]{}\n\t\tflushEvents := func() {\n\t\t\tif len(seen) == 0 {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tlogrus.Debugf(\"flush: %d events %s\", len(seen), seen)\n\n\t\t\tevents := make([]FileEvent, 0, len(seen))\n\t\t\tfor e := range seen {\n\t\t\t\tevents = append(events, e)\n\t\t\t}\n\t\t\tout <- events\n\t\t\tseen = utils.Set[FileEvent]{}\n\t\t}\n\n\t\tt := clock.NewTicker(QuietPeriod)\n\t\tdefer t.Stop()\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn\n\t\t\tcase <-t.Chan():\n\t\t\t\tflushEvents()\n\t\t\tcase e, ok := <-input:\n\t\t\t\tif !ok {\n\t\t\t\t\t// input channel was 
closed\n\t\t\t\t\tflushEvents()\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tif _, ok := seen[e]; !ok {\n\t\t\t\t\tseen.Add(e)\n\t\t\t\t}\n\t\t\t\tt.Reset(QuietPeriod)\n\t\t\t}\n\t\t}\n\t}()\n\treturn out\n}\n"
  },
  {
    "path": "pkg/watch/debounce_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n       http://www.apache.org/licenses/LICENSE-2.0\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"context\"\n\t\"slices\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/jonboulle/clockwork\"\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc Test_BatchDebounceEvents(t *testing.T) {\n\tch := make(chan FileEvent)\n\tclock := clockwork.NewFakeClock()\n\tctx, stop := context.WithCancel(t.Context())\n\tt.Cleanup(stop)\n\n\teventBatchCh := BatchDebounceEvents(ctx, clock, ch)\n\tfor i := range 100 {\n\t\tpath := \"/a\"\n\t\tif i%2 == 0 {\n\t\t\tpath = \"/b\"\n\t\t}\n\n\t\tch <- FileEvent(path)\n\t}\n\t// we sent 100 events + the debouncer\n\terr := clock.BlockUntilContext(ctx, 101)\n\tassert.NilError(t, err)\n\tclock.Advance(QuietPeriod)\n\tselect {\n\tcase batch := <-eventBatchCh:\n\t\tslices.Sort(batch)\n\t\tassert.Equal(t, len(batch), 2)\n\t\tassert.Equal(t, batch[0], FileEvent(\"/a\"))\n\t\tassert.Equal(t, batch[1], FileEvent(\"/b\"))\n\tcase <-time.After(50 * time.Millisecond):\n\t\tt.Fatal(\"timed out waiting for events\")\n\t}\n\terr = clock.BlockUntilContext(ctx, 1)\n\tassert.NilError(t, err)\n\tclock.Advance(QuietPeriod)\n\n\t// there should only be a single batch\n\tselect {\n\tcase batch := <-eventBatchCh:\n\t\tt.Fatalf(\"unexpected events: %v\", batch)\n\tcase <-time.After(50 * time.Millisecond):\n\t\t// channel is empty\n\t}\n}\n"
  },
  {
    "path": "pkg/watch/dockerignore.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"slices\"\n\t\"strings\"\n\n\t\"github.com/compose-spec/compose-go/v2/types\"\n\t\"github.com/moby/patternmatcher\"\n\t\"github.com/moby/patternmatcher/ignorefile\"\n\n\t\"github.com/docker/compose/v5/internal/paths\"\n)\n\ntype dockerPathMatcher struct {\n\trepoRoot string\n\tmatcher  *patternmatcher.PatternMatcher\n}\n\nfunc (i dockerPathMatcher) Matches(f string) (bool, error) {\n\tif !filepath.IsAbs(f) {\n\t\tf = filepath.Join(i.repoRoot, f)\n\t}\n\treturn i.matcher.MatchesOrParentMatches(f)\n}\n\nfunc (i dockerPathMatcher) MatchesEntireDir(f string) (bool, error) {\n\tmatches, err := i.Matches(f)\n\tif !matches || err != nil {\n\t\treturn matches, err\n\t}\n\n\t// We match the dir, but we might exclude files underneath it.\n\tif i.matcher.Exclusions() {\n\t\tfor _, pattern := range i.matcher.Patterns() {\n\t\t\tif !pattern.Exclusion() {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif paths.IsChild(f, pattern.String()) {\n\t\t\t\t// Found an exclusion match -- we don't match this whole dir\n\t\t\t\treturn false, nil\n\t\t\t}\n\t\t}\n\t\treturn true, nil\n\t}\n\treturn true, nil\n}\n\nfunc LoadDockerIgnore(build *types.BuildConfig) (PathMatcher, error) {\n\tif build == nil {\n\t\treturn EmptyMatcher{}, nil\n\t}\n\trepoRoot := build.Context\n\tabsRoot, err := 
filepath.Abs(repoRoot)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// first try Dockerfile-specific ignore-file\n\tf, err := os.Open(filepath.Join(repoRoot, build.Dockerfile+\".dockerignore\"))\n\tif os.IsNotExist(err) {\n\t\t// defaults to a global .dockerignore\n\t\tf, err = os.Open(filepath.Join(repoRoot, \".dockerignore\"))\n\t\tif os.IsNotExist(err) {\n\t\t\treturn NewDockerPatternMatcher(repoRoot, nil)\n\t\t}\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer func() { _ = f.Close() }()\n\n\tpatterns, err := readDockerignorePatterns(f)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn NewDockerPatternMatcher(absRoot, patterns)\n}\n\n// Make all the patterns use absolute paths.\nfunc absPatterns(absRoot string, patterns []string) []string {\n\tabsPatterns := make([]string, 0, len(patterns))\n\tfor _, p := range patterns {\n\t\t// The pattern parsing here is loosely adapted from fileutils' NewPatternMatcher\n\t\tp = strings.TrimSpace(p)\n\t\tif p == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tp = filepath.Clean(p)\n\n\t\tpPath := p\n\t\tisExclusion := false\n\t\tif p[0] == '!' {\n\t\t\tpPath = p[1:]\n\t\t\tisExclusion = true\n\t\t}\n\n\t\tif !filepath.IsAbs(pPath) {\n\t\t\tpPath = filepath.Join(absRoot, pPath)\n\t\t}\n\t\tabsPattern := pPath\n\t\tif isExclusion {\n\t\t\tabsPattern = fmt.Sprintf(\"!%s\", pPath)\n\t\t}\n\t\tabsPatterns = append(absPatterns, absPattern)\n\t}\n\treturn absPatterns\n}\n\nfunc NewDockerPatternMatcher(repoRoot string, patterns []string) (*dockerPathMatcher, error) {\n\tabsRoot, err := filepath.Abs(repoRoot)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Check if \"*\" is present in patterns\n\thasAllPattern := slices.Contains(patterns, \"*\")\n\tif hasAllPattern {\n\t\t// Remove all non-exclusion patterns (those that don't start with '!')\n\t\tpatterns = slices.DeleteFunc(patterns, func(p string) bool {\n\t\t\treturn p != \"\" && p[0] != '!' 
// Only keep exclusion patterns\n\t\t})\n\t}\n\n\tpm, err := patternmatcher.New(absPatterns(absRoot, patterns))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &dockerPathMatcher{\n\t\trepoRoot: absRoot,\n\t\tmatcher:  pm,\n\t}, nil\n}\n\nfunc readDockerignorePatterns(r io.Reader) ([]string, error) {\n\tpatterns, err := ignorefile.ReadAll(r)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error reading .dockerignore: %w\", err)\n\t}\n\treturn patterns, nil\n}\n\nfunc DockerIgnoreTesterFromContents(repoRoot string, contents string) (*dockerPathMatcher, error) {\n\tpatterns, err := ignorefile.ReadAll(strings.NewReader(contents))\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error reading .dockerignore: %w\", err)\n\t}\n\n\treturn NewDockerPatternMatcher(repoRoot, patterns)\n}\n"
  },
  {
    "path": "pkg/watch/dockerignore_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"testing\"\n)\n\nfunc TestNewDockerPatternMatcher(t *testing.T) {\n\ttests := []struct {\n\t\tname         string\n\t\trepoRoot     string\n\t\tpatterns     []string\n\t\texpectedErr  bool\n\t\texpectedRoot string\n\t\texpectedPat  []string\n\t}{\n\t\t{\n\t\t\tname:         \"Basic patterns without wildcard\",\n\t\t\trepoRoot:     \"/repo\",\n\t\t\tpatterns:     []string{\"dir1/\", \"file.txt\"},\n\t\t\texpectedErr:  false,\n\t\t\texpectedRoot: \"/repo\",\n\t\t\texpectedPat:  []string{\"/repo/dir1\", \"/repo/file.txt\"},\n\t\t},\n\t\t{\n\t\t\tname:         \"Patterns with exclusion\",\n\t\t\trepoRoot:     \"/repo\",\n\t\t\tpatterns:     []string{\"dir1/\", \"!file.txt\"},\n\t\t\texpectedErr:  false,\n\t\t\texpectedRoot: \"/repo\",\n\t\t\texpectedPat:  []string{\"/repo/dir1\", \"!/repo/file.txt\"},\n\t\t},\n\t\t{\n\t\t\tname:         \"Wildcard with exclusion\",\n\t\t\trepoRoot:     \"/repo\",\n\t\t\tpatterns:     []string{\"*\", \"!file.txt\"},\n\t\t\texpectedErr:  false,\n\t\t\texpectedRoot: \"/repo\",\n\t\t\texpectedPat:  []string{\"!/repo/file.txt\"},\n\t\t},\n\t\t{\n\t\t\tname:         \"No patterns\",\n\t\t\trepoRoot:     \"/repo\",\n\t\t\tpatterns:     []string{},\n\t\t\texpectedErr:  false,\n\t\t\texpectedRoot: \"/repo\",\n\t\t\texpectedPat:  nil,\n\t\t},\n\t\t{\n\t\t\tname:         \"Only exclusion 
pattern\",\n\t\t\trepoRoot:     \"/repo\",\n\t\t\tpatterns:     []string{\"!file.txt\"},\n\t\t\texpectedErr:  false,\n\t\t\texpectedRoot: \"/repo\",\n\t\t\texpectedPat:  []string{\"!/repo/file.txt\"},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// Call the function with the test data\n\t\t\tmatcher, err := NewDockerPatternMatcher(tt.repoRoot, tt.patterns)\n\n\t\t\t// Check if we expect an error\n\t\t\tif (err != nil) != tt.expectedErr {\n\t\t\t\tt.Fatalf(\"expected error: %v, got: %v\", tt.expectedErr, err)\n\t\t\t}\n\n\t\t\t// If no error is expected, check the output\n\t\t\tif !tt.expectedErr {\n\t\t\t\tif matcher.repoRoot != tt.expectedRoot {\n\t\t\t\t\tt.Errorf(\"expected root: %v, got: %v\", tt.expectedRoot, matcher.repoRoot)\n\t\t\t\t}\n\n\t\t\t\t// Compare patterns\n\t\t\t\tactualPatterns := matcher.matcher.Patterns()\n\t\t\t\tif len(actualPatterns) != len(tt.expectedPat) {\n\t\t\t\t\tt.Errorf(\"expected patterns length: %v, got: %v\", len(tt.expectedPat), len(actualPatterns))\n\t\t\t\t}\n\n\t\t\t\tfor i, expectedPat := range tt.expectedPat {\n\t\t\t\t\tactualPatternStr := actualPatterns[i].String()\n\t\t\t\t\tif actualPatterns[i].Exclusion() {\n\t\t\t\t\t\tactualPatternStr = \"!\" + actualPatternStr\n\t\t\t\t\t}\n\t\t\t\t\tif actualPatternStr != expectedPat {\n\t\t\t\t\t\tt.Errorf(\"expected pattern: %v, got: %v\", expectedPat, actualPatterns[i])\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "pkg/watch/ephemeral.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\n// EphemeralPathMatcher filters out spurious changes that we don't want to\n// rebuild on, like IDE temp/lock files.\n//\n// This isn't an ideal solution. In an ideal world, the user would put\n// everything to ignore in their tiltignore/dockerignore files. This is a\n// stop-gap so they don't have a terrible experience if those files aren't\n// there or aren't in the right places.\n//\n// NOTE: The underlying `patternmatcher` is NOT always Goroutine-safe, so\n// this is not a singleton; we create an instance for each watcher currently.\nfunc EphemeralPathMatcher() PathMatcher {\n\tgolandPatterns := []string{\"**/*___jb_old___\", \"**/*___jb_tmp___\", \"**/.idea/**\"}\n\temacsPatterns := []string{\"**/.#*\", \"**/#*#\"}\n\t// if .swp is taken (presumably because multiple vims are running in that dir),\n\t// vim will go with .swo, .swn, etc, and then even .svz, .svy!\n\t// https://github.com/vim/vim/blob/ea781459b9617aa47335061fcc78403495260315/src/memline.c#L5076\n\t// ignoring .sw? 
seems dangerous, since things like .swf or .swi exist, but ignoring the first few\n\t// seems safe and should catch most cases\n\tvimPatterns := []string{\"**/4913\", \"**/*~\", \"**/.*.swp\", \"**/.*.swx\", \"**/.*.swo\", \"**/.*.swn\"}\n\t// kate (the default text editor for KDE) uses a file similar to Vim's .swp\n\t// files, but it doesn't have the \"incrementing\" character problem mentioned\n\t// above\n\tkatePatterns := []string{\"**/.*.kate-swp\"}\n\t// go stdlib creates tmpfiles to determine umask for setting permissions\n\t// during file creation; they are then immediately deleted\n\t// https://github.com/golang/go/blob/0b5218cf4e3e5c17344ea113af346e8e0836f6c4/src/cmd/go/internal/work/exec.go#L1764\n\tgoPatterns := []string{\"**/*-go-tmp-umask\"}\n\n\tvar allPatterns []string\n\tallPatterns = append(allPatterns, golandPatterns...)\n\tallPatterns = append(allPatterns, emacsPatterns...)\n\tallPatterns = append(allPatterns, vimPatterns...)\n\tallPatterns = append(allPatterns, katePatterns...)\n\tallPatterns = append(allPatterns, goPatterns...)\n\n\tmatcher, err := NewDockerPatternMatcher(\"/\", allPatterns)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn matcher\n}\n"
  },
  {
    "path": "pkg/watch/ephemeral_test.go",
    "content": "/*\nCopyright 2023 Docker Compose CLI authors\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\thttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\npackage watch_test\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\n\t\"github.com/docker/compose/v5/pkg/watch\"\n)\n\nfunc TestEphemeralPathMatcher(t *testing.T) {\n\tignored := []string{\n\t\t\".file.txt.swp\",\n\t\t\"/path/file.txt~\",\n\t\t\"/home/moby/proj/.idea/modules.xml\",\n\t\t\".#file.txt\",\n\t\t\"#file.txt#\",\n\t\t\"/dir/.file.txt.kate-swp\",\n\t\t\"/go/app/1234-go-tmp-umask\",\n\t}\n\tmatcher := watch.EphemeralPathMatcher()\n\tfor _, p := range ignored {\n\t\tok, err := matcher.Matches(p)\n\t\trequire.NoErrorf(t, err, \"Matching %s\", p)\n\t\tassert.Truef(t, ok, \"Path %s should have matched\", p)\n\t}\n\n\tconst includedPath = \"normal.txt\"\n\tok, err := matcher.Matches(includedPath)\n\trequire.NoErrorf(t, err, \"Matching %s\", includedPath)\n\tassert.Falsef(t, ok, \"Path %s should NOT have matched\", includedPath)\n}\n"
  },
  {
    "path": "pkg/watch/notify.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"expvar\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n)\n\nvar numberOfWatches = expvar.NewInt(\"watch.naive.numberOfWatches\")\n\ntype FileEvent string\n\nfunc NewFileEvent(p string) FileEvent {\n\tif !filepath.IsAbs(p) {\n\t\tpanic(fmt.Sprintf(\"NewFileEvent only accepts absolute paths. Actual: %s\", p))\n\t}\n\treturn FileEvent(p)\n}\n\ntype Notify interface {\n\t// Start watching the paths set at init time\n\tStart() error\n\n\t// Stop watching and close all channels\n\tClose() error\n\n\t// A channel to read off incoming file changes\n\tEvents() chan FileEvent\n\n\t// A channel to read off show-stopping errors\n\tErrors() chan error\n}\n\n// When we specify directories to watch, we often want to\n// ignore some subset of the files under those directories.\n//\n// For example:\n// - Watch /src/repo, but ignore /src/repo/.git\n// - Watch /src/repo, but ignore everything in /src/repo/bazel-bin except /src/repo/bazel-bin/app-binary\n//\n// The PathMatcher interface helps us manage these ignores.\ntype PathMatcher interface {\n\tMatches(file string) (bool, error)\n\n\t// If this matches the entire dir, we can often optimize filetree walks a bit.\n\tMatchesEntireDir(file string) (bool, error)\n}\n\n// AnyMatcher is a PathMatcher to match any path\ntype AnyMatcher struct{}\n\nfunc (AnyMatcher) 
Matches(f string) (bool, error)          { return true, nil }\nfunc (AnyMatcher) MatchesEntireDir(f string) (bool, error) { return true, nil }\n\nvar _ PathMatcher = AnyMatcher{}\n\n// EmptyMatcher is a PathMatcher to match no path\ntype EmptyMatcher struct{}\n\nfunc (EmptyMatcher) Matches(f string) (bool, error)          { return false, nil }\nfunc (EmptyMatcher) MatchesEntireDir(f string) (bool, error) { return false, nil }\n\nvar _ PathMatcher = EmptyMatcher{}\n\nfunc NewWatcher(paths []string) (Notify, error) {\n\treturn newWatcher(paths)\n}\n\nconst WindowsBufferSizeEnvVar = \"COMPOSE_WATCH_WINDOWS_BUFFER_SIZE\"\n\nconst defaultBufferSize int = 65536\n\nfunc DesiredWindowsBufferSize() int {\n\tenvVar := os.Getenv(WindowsBufferSizeEnvVar)\n\tif envVar != \"\" {\n\t\tsize, err := strconv.Atoi(envVar)\n\t\tif err == nil {\n\t\t\treturn size\n\t\t}\n\t}\n\treturn defaultBufferSize\n}\n\ntype CompositePathMatcher struct {\n\tMatchers []PathMatcher\n}\n\nfunc NewCompositeMatcher(matchers ...PathMatcher) PathMatcher {\n\tif len(matchers) == 0 {\n\t\treturn EmptyMatcher{}\n\t}\n\treturn CompositePathMatcher{Matchers: matchers}\n}\n\nfunc (c CompositePathMatcher) Matches(f string) (bool, error) {\n\tfor _, t := range c.Matchers {\n\t\tret, err := t.Matches(f)\n\t\tif err != nil {\n\t\t\treturn false, err\n\t\t}\n\t\tif ret {\n\t\t\treturn true, nil\n\t\t}\n\t}\n\treturn false, nil\n}\n\nfunc (c CompositePathMatcher) MatchesEntireDir(f string) (bool, error) {\n\tfor _, t := range c.Matchers {\n\t\tmatches, err := t.MatchesEntireDir(f)\n\t\tif matches || err != nil {\n\t\t\treturn matches, err\n\t\t}\n\t}\n\treturn false, nil\n}\n\nvar _ PathMatcher = CompositePathMatcher{}\n"
  },
  {
    "path": "pkg/watch/notify_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n)\n\n// Each implementation of the notify interface should have the same basic\n// behavior.\n\nfunc TestWindowsBufferSize(t *testing.T) {\n\tt.Run(\"empty value\", func(t *testing.T) {\n\t\tt.Setenv(WindowsBufferSizeEnvVar, \"\")\n\t\tassert.Equal(t, defaultBufferSize, DesiredWindowsBufferSize())\n\t})\n\n\tt.Run(\"invalid value\", func(t *testing.T) {\n\t\tt.Setenv(WindowsBufferSizeEnvVar, \"a\")\n\t\tassert.Equal(t, defaultBufferSize, DesiredWindowsBufferSize())\n\t})\n\n\tt.Run(\"valid value\", func(t *testing.T) {\n\t\tt.Setenv(WindowsBufferSizeEnvVar, \"10\")\n\t\tassert.Equal(t, 10, DesiredWindowsBufferSize())\n\t})\n}\n\nfunc TestNoEvents(t *testing.T) {\n\tf := newNotifyFixture(t)\n\tf.assertEvents()\n}\n\nfunc TestNoWatches(t *testing.T) {\n\tf := newNotifyFixture(t)\n\tf.paths = nil\n\tf.rebuildWatcher()\n\tf.assertEvents()\n}\n\nfunc TestEventOrdering(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\t// https://qualapps.blogspot.com/2010/05/understanding-readdirectorychangesw_19.html\n\t\tt.Skip(\"Windows doesn't make great guarantees about duplicate/out-of-order 
events\")\n\t\treturn\n\t}\n\tf := newNotifyFixture(t)\n\n\tcount := 8\n\tdirs := make([]string, count)\n\tfor i := range dirs {\n\t\tdir := f.TempDir(\"watched\")\n\t\tdirs[i] = dir\n\t\tf.watch(dir)\n\t}\n\n\tf.fsync()\n\tf.events = nil\n\n\tvar expected []string\n\tfor i, dir := range dirs {\n\t\tbase := fmt.Sprintf(\"%d.txt\", i)\n\t\tp := filepath.Join(dir, base)\n\t\terr := os.WriteFile(p, []byte(base), os.FileMode(0o777))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\texpected = append(expected, filepath.Join(dir, base))\n\t}\n\n\tf.assertEvents(expected...)\n}\n\n// Simulate a git branch switch that creates a bunch\n// of directories, creates files in them, then deletes\n// them all quickly. Make sure there are no errors.\nfunc TestGitBranchSwitch(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\tcount := 10\n\tdirs := make([]string, count)\n\tfor i := range dirs {\n\t\tdir := f.TempDir(\"watched\")\n\t\tdirs[i] = dir\n\t\tf.watch(dir)\n\t}\n\n\tf.fsync()\n\tf.events = nil\n\n\t// consume all the events in the background\n\tctx, cancel := context.WithCancel(t.Context())\n\tdone := f.consumeEventsInBackground(ctx)\n\n\tfor i, dir := range dirs {\n\t\tfor j := range count {\n\t\t\tbase := fmt.Sprintf(\"x/y/dir-%d/x.txt\", j)\n\t\t\tp := filepath.Join(dir, base)\n\t\t\tf.WriteFile(p, \"contents\")\n\t\t}\n\n\t\tif i != 0 {\n\t\t\terr := os.RemoveAll(dir)\n\t\t\trequire.NoError(t, err)\n\t\t}\n\t}\n\n\tcancel()\n\terr := <-done\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.fsync()\n\tf.events = nil\n\n\t// Make sure the watch on the first dir still works.\n\tdir := dirs[0]\n\tpath := filepath.Join(dir, \"change\")\n\n\tf.WriteFile(path, \"hello\\n\")\n\tf.fsync()\n\n\tf.assertEvents(path)\n\n\t// Make sure there are no errors in the out stream\n\tassert.Empty(t, f.out.String())\n}\n\nfunc TestWatchesAreRecursive(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\n\t// add a sub directory\n\tsubPath := filepath.Join(root, 
\"sub\")\n\tf.MkdirAll(subPath)\n\n\t// watch parent\n\tf.watch(root)\n\n\tf.fsync()\n\tf.events = nil\n\t// change sub directory\n\tchangeFilePath := filepath.Join(subPath, \"change\")\n\tf.WriteFile(changeFilePath, \"change\")\n\n\tf.assertEvents(changeFilePath)\n}\n\nfunc TestNewDirectoriesAreRecursivelyWatched(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\n\t// watch parent\n\tf.watch(root)\n\tf.fsync()\n\tf.events = nil\n\n\t// add a sub directory\n\tsubPath := filepath.Join(root, \"sub\")\n\tf.MkdirAll(subPath)\n\n\t// change something inside sub directory\n\tchangeFilePath := filepath.Join(subPath, \"change\")\n\tfile, err := os.OpenFile(changeFilePath, os.O_RDONLY|os.O_CREATE, 0o666)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_ = file.Close()\n\tf.assertEvents(subPath, changeFilePath)\n}\n\nfunc TestWatchNonExistentPath(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\tpath := filepath.Join(root, \"change\")\n\n\tf.watch(path)\n\tf.fsync()\n\n\td1 := \"hello\\ngo\\n\"\n\tf.WriteFile(path, d1)\n\tf.assertEvents(path)\n}\n\nfunc TestWatchNonExistentPathDoesNotFireSiblingEvent(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\twatchedFile := filepath.Join(root, \"a.txt\")\n\tunwatchedSibling := filepath.Join(root, \"b.txt\")\n\n\tf.watch(watchedFile)\n\tf.fsync()\n\n\td1 := \"hello\\ngo\\n\"\n\tf.WriteFile(unwatchedSibling, d1)\n\tf.assertEvents()\n}\n\nfunc TestRemove(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\tpath := filepath.Join(root, \"change\")\n\n\td1 := \"hello\\ngo\\n\"\n\tf.WriteFile(path, d1)\n\n\tf.watch(path)\n\tf.fsync()\n\tf.events = nil\n\terr := os.Remove(path)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tf.assertEvents(path)\n}\n\nfunc TestRemoveAndAddBack(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\tpath := filepath.Join(f.paths[0], \"change\")\n\n\td1 := []byte(\"hello\\ngo\\n\")\n\terr := os.WriteFile(path, 
d1, 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tf.watch(path)\n\tf.assertEvents(path)\n\n\terr = os.Remove(path)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.assertEvents(path)\n\tf.events = nil\n\n\terr = os.WriteFile(path, d1, 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.assertEvents(path)\n}\n\nfunc TestSingleFile(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\tpath := filepath.Join(root, \"change\")\n\n\td1 := \"hello\\ngo\\n\"\n\tf.WriteFile(path, d1)\n\n\tf.watch(path)\n\tf.fsync()\n\n\td2 := []byte(\"hello\\nworld\\n\")\n\terr := os.WriteFile(path, d2, 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tf.assertEvents(path)\n}\n\nfunc TestWriteBrokenLink(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.Skip(\"no user-space symlinks on windows\")\n\t}\n\tf := newNotifyFixture(t)\n\n\tlink := filepath.Join(f.paths[0], \"brokenLink\")\n\tmissingFile := filepath.Join(f.paths[0], \"missingFile\")\n\terr := os.Symlink(missingFile, link)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.assertEvents(link)\n}\n\nfunc TestWriteGoodLink(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.Skip(\"no user-space symlinks on windows\")\n\t}\n\tf := newNotifyFixture(t)\n\n\tgoodFile := filepath.Join(f.paths[0], \"goodFile\")\n\terr := os.WriteFile(goodFile, []byte(\"hello\"), 0o644)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tlink := filepath.Join(f.paths[0], \"goodFileSymlink\")\n\terr = os.Symlink(goodFile, link)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.assertEvents(goodFile, link)\n}\n\nfunc TestWatchBrokenLink(t *testing.T) {\n\tif runtime.GOOS == \"windows\" {\n\t\tt.Skip(\"no user-space symlinks on windows\")\n\t}\n\tf := newNotifyFixture(t)\n\n\tnewRoot, err := NewDir(t.Name())\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer func() {\n\t\terr := newRoot.TearDown()\n\t\tif err != nil {\n\t\t\tfmt.Printf(\"error tearing down temp dir: %v\\n\", err)\n\t\t}\n\t}()\n\n\tlink := 
filepath.Join(newRoot.Path(), \"brokenLink\")\n\tmissingFile := filepath.Join(newRoot.Path(), \"missingFile\")\n\terr = os.Symlink(missingFile, link)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.watch(newRoot.Path())\n\terr = os.Remove(link)\n\trequire.NoError(t, err)\n\tf.assertEvents(link)\n}\n\nfunc TestMoveAndReplace(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.TempDir(\"root\")\n\tfile := filepath.Join(root, \"myfile\")\n\tf.WriteFile(file, \"hello\")\n\n\tf.watch(file)\n\ttmpFile := filepath.Join(root, \".myfile.swp\")\n\tf.WriteFile(tmpFile, \"world\")\n\n\terr := os.Rename(tmpFile, file)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.assertEvents(file)\n}\n\nfunc TestWatchBothDirAndFile(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\tdir := f.JoinPath(\"foo\")\n\tfileA := f.JoinPath(\"foo\", \"a\")\n\tfileB := f.JoinPath(\"foo\", \"b\")\n\tf.WriteFile(fileA, \"a\")\n\tf.WriteFile(fileB, \"b\")\n\n\tf.watch(fileA)\n\tf.watch(dir)\n\tf.fsync()\n\tf.events = nil\n\n\tf.WriteFile(fileB, \"b-new\")\n\tf.assertEvents(fileB)\n}\n\nfunc TestWatchNonexistentFileInNonexistentDirectoryCreatedSimultaneously(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.JoinPath(\"root\")\n\terr := os.Mkdir(root, 0o777)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tfile := f.JoinPath(\"root\", \"parent\", \"a\")\n\n\tf.watch(file)\n\tf.fsync()\n\tf.events = nil\n\tf.WriteFile(file, \"hello\")\n\tf.assertEvents(file)\n}\n\nfunc TestWatchNonexistentDirectory(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.JoinPath(\"root\")\n\terr := os.Mkdir(root, 0o777)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tparent := f.JoinPath(\"parent\")\n\tfile := f.JoinPath(\"parent\", \"a\")\n\n\tf.watch(parent)\n\tf.fsync()\n\tf.events = nil\n\n\terr = os.Mkdir(parent, 0o777)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// for directories that were the root of an Add, we don't report creation, cf. 
watcher_darwin.go\n\tf.assertEvents()\n\n\tf.events = nil\n\tf.WriteFile(file, \"hello\")\n\n\tf.assertEvents(file)\n}\n\nfunc TestWatchNonexistentFileInNonexistentDirectory(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.JoinPath(\"root\")\n\terr := os.Mkdir(root, 0o777)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tparent := f.JoinPath(\"parent\")\n\tfile := f.JoinPath(\"parent\", \"a\")\n\n\tf.watch(file)\n\tf.assertEvents()\n\n\terr = os.Mkdir(parent, 0o777)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.assertEvents()\n\tf.WriteFile(file, \"hello\")\n\tf.assertEvents(file)\n}\n\nfunc TestWatchCountInnerFile(t *testing.T) {\n\tf := newNotifyFixture(t)\n\n\troot := f.paths[0]\n\ta := f.JoinPath(root, \"a\")\n\tb := f.JoinPath(a, \"b\")\n\tfile := f.JoinPath(b, \"bigFile\")\n\tf.WriteFile(file, \"hello\")\n\tf.assertEvents(a, b, file)\n\n\texpectedWatches := 3\n\tif isRecursiveWatcher() {\n\t\texpectedWatches = 1\n\t}\n\tassert.Equal(t, expectedWatches, int(numberOfWatches.Value()))\n}\n\nfunc isRecursiveWatcher() bool {\n\treturn runtime.GOOS == \"darwin\" || runtime.GOOS == \"windows\"\n}\n\ntype notifyFixture struct {\n\tctx    context.Context\n\tcancel func()\n\tout    *bytes.Buffer\n\t*TempDirFixture\n\tnotify Notify\n\tpaths  []string\n\tevents []FileEvent\n}\n\nfunc newNotifyFixture(t *testing.T) *notifyFixture {\n\tout := bytes.NewBuffer(nil)\n\tctx, cancel := context.WithCancel(t.Context())\n\tnf := &notifyFixture{\n\t\tctx:            ctx,\n\t\tcancel:         cancel,\n\t\tTempDirFixture: NewTempDirFixture(t),\n\t\tpaths:          []string{},\n\t\tout:            out,\n\t}\n\tnf.watch(nf.TempDir(\"watched\"))\n\tt.Cleanup(nf.tearDown)\n\treturn nf\n}\n\nfunc (f *notifyFixture) watch(path string) {\n\tf.paths = append(f.paths, path)\n\tf.rebuildWatcher()\n}\n\nfunc (f *notifyFixture) rebuildWatcher() {\n\t// sync any outstanding events and close the old watcher\n\tif f.notify != nil {\n\t\tf.fsync()\n\t\tf.closeWatcher()\n\t}\n\n\t// create a 
new watcher\n\tnotify, err := NewWatcher(f.paths)\n\tif err != nil {\n\t\tf.T().Fatal(err)\n\t}\n\tf.notify = notify\n\terr = f.notify.Start()\n\tif err != nil {\n\t\tf.T().Fatal(err)\n\t}\n}\n\nfunc (f *notifyFixture) assertEvents(expected ...string) {\n\tf.fsync()\n\tif runtime.GOOS == \"windows\" {\n\t\t// NOTE(nick): It's unclear to me why an extra fsync() helps\n\t\t// here, but it makes the I/O way more predictable.\n\t\tf.fsync()\n\t}\n\n\tif len(f.events) != len(expected) {\n\t\tf.T().Fatalf(\"Got %d events (expected %d): %v %v\", len(f.events), len(expected), f.events, expected)\n\t}\n\n\tfor i, actual := range f.events {\n\t\te := FileEvent(expected[i])\n\t\tif actual != e {\n\t\t\tf.T().Fatalf(\"Got event %v (expected %v)\", actual, e)\n\t\t}\n\t}\n}\n\nfunc (f *notifyFixture) consumeEventsInBackground(ctx context.Context) chan error {\n\tdone := make(chan error)\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-f.ctx.Done():\n\t\t\t\tclose(done)\n\t\t\t\treturn\n\t\t\tcase <-ctx.Done():\n\t\t\t\tclose(done)\n\t\t\t\treturn\n\t\t\tcase err := <-f.notify.Errors():\n\t\t\t\tdone <- err\n\t\t\t\tclose(done)\n\t\t\t\treturn\n\t\t\tcase <-f.notify.Events():\n\t\t\t}\n\t\t}\n\t}()\n\treturn done\n}\n\nfunc (f *notifyFixture) fsync() {\n\tf.fsyncWithRetryCount(3)\n}\n\nfunc (f *notifyFixture) fsyncWithRetryCount(retryCount int) {\n\tif len(f.paths) == 0 {\n\t\treturn\n\t}\n\n\tsyncPathBase := fmt.Sprintf(\"sync-%d.txt\", time.Now().UnixNano())\n\tsyncPath := filepath.Join(f.paths[0], syncPathBase)\n\tanySyncPath := filepath.Join(f.paths[0], \"sync-\")\n\ttimeout := time.After(250 * time.Millisecond)\n\n\tf.WriteFile(syncPath, time.Now().String())\n\nF:\n\tfor {\n\t\tselect {\n\t\tcase <-f.ctx.Done():\n\t\t\treturn\n\t\tcase err := <-f.notify.Errors():\n\t\t\tf.T().Fatal(err)\n\n\t\tcase event := <-f.notify.Events():\n\t\t\tif strings.Contains(string(event), syncPath) {\n\t\t\t\tbreak F\n\t\t\t}\n\t\t\tif strings.Contains(string(event), anySyncPath) 
{\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Don't bother tracking duplicate changes to the same path\n\t\t\t// for testing.\n\t\t\tif len(f.events) > 0 && f.events[len(f.events)-1] == event {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tf.events = append(f.events, event)\n\n\t\tcase <-timeout:\n\t\t\tif retryCount <= 0 {\n\t\t\t\tf.T().Fatalf(\"fsync: timeout\")\n\t\t\t} else {\n\t\t\t\tf.fsyncWithRetryCount(retryCount - 1)\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc (f *notifyFixture) closeWatcher() {\n\tnotify := f.notify\n\terr := notify.Close()\n\tif err != nil {\n\t\tf.T().Fatal(err)\n\t}\n\n\t// drain channels from watcher\n\tgo func() {\n\t\tfor range notify.Events() {\n\t\t}\n\t}()\n\n\tgo func() {\n\t\tfor range notify.Errors() {\n\t\t}\n\t}()\n}\n\nfunc (f *notifyFixture) tearDown() {\n\tf.cancel()\n\tf.closeWatcher()\n\tnumberOfWatches.Set(0)\n}\n"
  },
  {
    "path": "pkg/watch/paths.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n)\n\nfunc greatestExistingAncestor(path string) (string, error) {\n\tif path == string(filepath.Separator) ||\n\t\tpath == fmt.Sprintf(\"%s%s\", filepath.VolumeName(path), string(filepath.Separator)) {\n\t\treturn \"\", fmt.Errorf(\"cannot watch root directory\")\n\t}\n\n\t_, err := os.Stat(path)\n\tif err != nil && !os.IsNotExist(err) {\n\t\treturn \"\", fmt.Errorf(\"os.Stat(%q): %w\", path, err)\n\t}\n\n\tif os.IsNotExist(err) {\n\t\treturn greatestExistingAncestor(filepath.Dir(path))\n\t}\n\n\treturn path, nil\n}\n"
  },
  {
    "path": "pkg/watch/paths_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"runtime\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestGreatestExistingAncestor(t *testing.T) {\n\tf := NewTempDirFixture(t)\n\n\tp, err := greatestExistingAncestor(f.Path())\n\trequire.NoError(t, err)\n\tassert.Equal(t, f.Path(), p)\n\n\tp, err = greatestExistingAncestor(f.JoinPath(\"missing\"))\n\trequire.NoError(t, err)\n\tassert.Equal(t, f.Path(), p)\n\n\tmissingTopLevel := \"/missingDir/a/b/c\"\n\tif runtime.GOOS == \"windows\" {\n\t\tmissingTopLevel = \"C:\\\\missingDir\\\\a\\\\b\\\\c\"\n\t}\n\t_, err = greatestExistingAncestor(missingTopLevel)\n\tassert.Contains(t, err.Error(), \"cannot watch root directory\")\n}\n"
  },
  {
    "path": "pkg/watch/temp.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// TempDir holds a temp directory and allows easy access to new temp directories.\ntype TempDir struct {\n\tdir string\n}\n\n// NewDir creates a new TempDir in the default location (typically $TMPDIR)\nfunc NewDir(prefix string) (*TempDir, error) {\n\treturn NewDirAtRoot(\"\", prefix)\n}\n\n// NewDirAtRoot creates a new TempDir at the given root.\nfunc NewDirAtRoot(root, prefix string) (*TempDir, error) {\n\ttmpDir, err := os.MkdirTemp(root, prefix)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\trealTmpDir, err := filepath.EvalSymlinks(tmpDir)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &TempDir{dir: realTmpDir}, nil\n}\n\n// NewDirAtSlashTmp creates a new TempDir at /tmp\nfunc NewDirAtSlashTmp(prefix string) (*TempDir, error) {\n\tfullyResolvedPath, err := filepath.EvalSymlinks(\"/tmp\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn NewDirAtRoot(fullyResolvedPath, prefix)\n}\n\n// d.NewDir creates a new TempDir under d\nfunc (d *TempDir) NewDir(prefix string) (*TempDir, error) {\n\td2, err := os.MkdirTemp(d.dir, prefix)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &TempDir{d2}, nil\n}\n\nfunc (d *TempDir) NewDeterministicDir(name string) (*TempDir, error) {\n\td2 := filepath.Join(d.dir, name)\n\terr := os.Mkdir(d2, 0o700)\n\tif 
os.IsExist(err) {\n\t\treturn nil, err\n\t} else if err != nil {\n\t\treturn nil, err\n\t}\n\treturn &TempDir{d2}, nil\n}\n\nfunc (d *TempDir) TearDown() error {\n\treturn os.RemoveAll(d.dir)\n}\n\nfunc (d *TempDir) Path() string {\n\treturn d.dir\n}\n\n// Possible extensions:\n// temp file\n// named directories or files (e.g., we know we want one git repo for our object, but it should be temporary)\n"
  },
  {
    "path": "pkg/watch/temp_dir_fixture.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"strings\"\n\t\"testing\"\n)\n\ntype TempDirFixture struct {\n\tt      testing.TB\n\tdir    *TempDir\n\toldDir string\n}\n\n// everything not listed in this character class will get replaced by _, so that it's a safe filename\nvar sanitizeForFilenameRe = regexp.MustCompile(\"[^a-zA-Z0-9.]\")\n\nfunc SanitizeFileName(name string) string {\n\treturn sanitizeForFilenameRe.ReplaceAllString(name, \"_\")\n}\n\nfunc NewTempDirFixture(t testing.TB) *TempDirFixture {\n\tdir, err := NewDir(SanitizeFileName(t.Name()))\n\tif err != nil {\n\t\tt.Fatalf(\"Error making temp dir: %v\", err)\n\t}\n\n\tret := &TempDirFixture{\n\t\tt:   t,\n\t\tdir: dir,\n\t}\n\n\tt.Cleanup(ret.tearDown)\n\n\treturn ret\n}\n\nfunc (f *TempDirFixture) T() testing.TB {\n\treturn f.t\n}\n\nfunc (f *TempDirFixture) Path() string {\n\treturn f.dir.Path()\n}\n\nfunc (f *TempDirFixture) Chdir() {\n\tcwd, err := os.Getwd()\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\n\tf.oldDir = cwd\n\n\terr = os.Chdir(f.Path())\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n}\n\nfunc (f *TempDirFixture) JoinPath(path ...string) string {\n\tp := []string{}\n\tisAbs := len(path) > 0 && filepath.IsAbs(path[0])\n\tif isAbs {\n\t\tif !strings.HasPrefix(path[0], f.Path()) {\n\t\t\tf.t.Fatalf(\"Path outside 
fixture tempdir is forbidden: %s\", path[0])\n\t\t}\n\t} else {\n\t\tp = append(p, f.Path())\n\t}\n\n\tp = append(p, path...)\n\treturn filepath.Join(p...)\n}\n\nfunc (f *TempDirFixture) JoinPaths(paths []string) []string {\n\tjoined := make([]string, len(paths))\n\tfor i, p := range paths {\n\t\tjoined[i] = f.JoinPath(p)\n\t}\n\treturn joined\n}\n\n// Returns the full path to the file written.\nfunc (f *TempDirFixture) WriteFile(path string, contents string) string {\n\tfullPath := f.JoinPath(path)\n\tbase := filepath.Dir(fullPath)\n\terr := os.MkdirAll(base, os.FileMode(0o777))\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\terr = os.WriteFile(fullPath, []byte(contents), os.FileMode(0o777))\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\treturn fullPath\n}\n\n// Copies the file at originalPath to newPath within the fixture.\nfunc (f *TempDirFixture) CopyFile(originalPath, newPath string) {\n\tcontents, err := os.ReadFile(originalPath)\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\tf.WriteFile(newPath, string(contents))\n}\n\n// Read the file.\nfunc (f *TempDirFixture) ReadFile(path string) string {\n\tfullPath := f.JoinPath(path)\n\tcontents, err := os.ReadFile(fullPath)\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\treturn string(contents)\n}\n\nfunc (f *TempDirFixture) WriteSymlink(linkContents, destPath string) {\n\tfullDestPath := f.JoinPath(destPath)\n\terr := os.MkdirAll(filepath.Dir(fullDestPath), os.FileMode(0o777))\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\terr = os.Symlink(linkContents, fullDestPath)\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n}\n\nfunc (f *TempDirFixture) MkdirAll(path string) {\n\tfullPath := f.JoinPath(path)\n\terr := os.MkdirAll(fullPath, os.FileMode(0o777))\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n}\n\nfunc (f *TempDirFixture) TouchFiles(paths []string) {\n\tfor _, p := range paths {\n\t\tf.WriteFile(p, \"\")\n\t}\n}\n\nfunc (f *TempDirFixture) Rm(pathInRepo string) {\n\tfullPath := f.JoinPath(pathInRepo)\n\terr := os.RemoveAll(fullPath)\n\tif 
err != nil {\n\t\tf.t.Fatal(err)\n\t}\n}\n\nfunc (f *TempDirFixture) NewFile(prefix string) (*os.File, error) {\n\treturn os.CreateTemp(f.dir.Path(), prefix)\n}\n\nfunc (f *TempDirFixture) TempDir(prefix string) string {\n\tname, err := os.MkdirTemp(f.dir.Path(), prefix)\n\tif err != nil {\n\t\tf.t.Fatal(err)\n\t}\n\treturn name\n}\n\nfunc (f *TempDirFixture) tearDown() {\n\tif f.oldDir != \"\" {\n\t\terr := os.Chdir(f.oldDir)\n\t\tif err != nil {\n\t\t\tf.t.Fatal(err)\n\t\t}\n\t}\n\n\terr := f.dir.TearDown()\n\tif err != nil && runtime.GOOS == \"windows\" &&\n\t\t(strings.Contains(err.Error(), \"The process cannot access the file\") ||\n\t\t\tstrings.Contains(err.Error(), \"Access is denied\")) {\n\t\t// NOTE(nick): I'm not convinced that this is a real problem.\n\t\t// I think it might just be clean up of file notification I/O.\n\t} else if err != nil {\n\t\tf.t.Fatal(err)\n\t}\n}\n"
  },
  {
    "path": "pkg/watch/watcher_darwin.go",
    "content": "//go:build fsnotify\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/fsnotify/fsevents\"\n\n\tpathutil \"github.com/docker/compose/v5/internal/paths\"\n)\n\n// A file watcher optimized for Darwin.\n// Uses FSEvents to avoid the terrible perf characteristics of kqueue. Requires CGO\ntype fseventNotify struct {\n\tstream *fsevents.EventStream\n\tevents chan FileEvent\n\terrors chan error\n\tstop   chan struct{}\n\n\tpathsWereWatching map[string]any\n\tcloseOnce         sync.Once\n}\n\nfunc (d *fseventNotify) loop() {\n\tfor {\n\t\tselect {\n\t\tcase <-d.stop:\n\t\t\treturn\n\t\tcase events, ok := <-d.stream.Events:\n\t\t\tif !ok {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tfor _, e := range events {\n\t\t\t\te.Path = filepath.Join(string(os.PathSeparator), e.Path)\n\n\t\t\t\t_, isPathWereWatching := d.pathsWereWatching[e.Path]\n\t\t\t\tif e.Flags&fsevents.ItemIsDir == fsevents.ItemIsDir && e.Flags&fsevents.ItemCreated == fsevents.ItemCreated && isPathWereWatching {\n\t\t\t\t\t// This is the first create for the path that we're watching. We always get exactly one of these\n\t\t\t\t\t// even after we get the HistoryDone event. Skip it.\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\td.events <- NewFileEvent(e.Path)\n\t\t\t}\n\t\t}\n\t}\n}\n\n// Add a path to be watched. 
Should only be called during initialization.\nfunc (d *fseventNotify) initAdd(name string) {\n\td.stream.Paths = append(d.stream.Paths, name)\n\n\tif d.pathsWereWatching == nil {\n\t\td.pathsWereWatching = make(map[string]any)\n\t}\n\td.pathsWereWatching[name] = struct{}{}\n}\n\nfunc (d *fseventNotify) Start() error {\n\tif len(d.stream.Paths) == 0 {\n\t\treturn nil\n\t}\n\n\td.closeOnce = sync.Once{}\n\n\tnumberOfWatches.Add(int64(len(d.stream.Paths)))\n\n\terr := d.stream.Start()\n\tif err != nil {\n\t\treturn err\n\t}\n\tgo d.loop()\n\treturn nil\n}\n\nfunc (d *fseventNotify) Close() error {\n\td.closeOnce.Do(func() {\n\t\tnumberOfWatches.Add(int64(-len(d.stream.Paths)))\n\n\t\td.stream.Stop()\n\t\tclose(d.errors)\n\t\tclose(d.stop)\n\t})\n\n\treturn nil\n}\n\nfunc (d *fseventNotify) Events() chan FileEvent {\n\treturn d.events\n}\n\nfunc (d *fseventNotify) Errors() chan error {\n\treturn d.errors\n}\n\nfunc newWatcher(paths []string) (Notify, error) {\n\tdw := &fseventNotify{\n\t\tstream: &fsevents.EventStream{\n\t\t\tLatency: 50 * time.Millisecond,\n\t\t\tFlags:   fsevents.FileEvents | fsevents.IgnoreSelf,\n\t\t\t// NOTE(dmiller): this corresponds to the `sinceWhen` parameter in FSEventStreamCreate\n\t\t\t// https://developer.apple.com/documentation/coreservices/1443980-fseventstreamcreate\n\t\t\tEventID: fsevents.LatestEventID(),\n\t\t},\n\t\tevents: make(chan FileEvent),\n\t\terrors: make(chan error),\n\t\tstop:   make(chan struct{}),\n\t}\n\n\tpaths = pathutil.EncompassingPaths(paths)\n\tfor _, path := range paths {\n\t\tpath, err := filepath.Abs(path)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"newWatcher: %w\", err)\n\t\t}\n\t\tdw.initAdd(path)\n\t}\n\n\treturn dw, nil\n}\n\nvar _ Notify = &fseventNotify{}\n"
  },
  {
    "path": "pkg/watch/watcher_darwin_test.go",
    "content": "//go:build fsnotify\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"testing\"\n\n\t\"gotest.tools/v3/assert\"\n)\n\nfunc TestFseventNotifyCloseIdempotent(t *testing.T) {\n\t// Create a watcher with a temporary directory\n\ttmpDir := t.TempDir()\n\twatcher, err := newWatcher([]string{tmpDir})\n\tassert.NilError(t, err)\n\n\t// Start the watcher\n\terr = watcher.Start()\n\tassert.NilError(t, err)\n\n\t// Close should work the first time\n\terr = watcher.Close()\n\tassert.NilError(t, err)\n\n\t// Close should be idempotent - calling it again should not panic\n\terr = watcher.Close()\n\tassert.NilError(t, err)\n\n\t// Even a third time should be safe\n\terr = watcher.Close()\n\tassert.NilError(t, err)\n}\n"
  },
  {
    "path": "pkg/watch/watcher_naive.go",
    "content": "//go:build !fsnotify\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"fmt\"\n\t\"io/fs\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strings\"\n\n\t\"github.com/sirupsen/logrus\"\n\t\"github.com/tilt-dev/fsnotify\"\n\n\tpathutil \"github.com/docker/compose/v5/internal/paths\"\n)\n\n// A naive file watcher that uses the plain fsnotify API.\n// Used on all non-Darwin systems (including Windows & Linux).\n//\n// All OS-specific codepaths are handled by fsnotify.\ntype naiveNotify struct {\n\t// Paths that we're watching that should be passed up to the caller.\n\t// Note that we may have to watch ancestors of these paths\n\t// in order to fulfill the API promise.\n\t//\n\t// We often need to check if paths are a child of a path in\n\t// the notify list. 
It might be better to store this in a tree\n\t// structure, so we can filter the list quickly.\n\tnotifyList map[string]bool\n\n\tisWatcherRecursive bool\n\twatcher            *fsnotify.Watcher\n\tevents             chan fsnotify.Event\n\twrappedEvents      chan FileEvent\n\terrors             chan error\n\tnumWatches         int64\n}\n\nfunc (d *naiveNotify) Start() error {\n\tif len(d.notifyList) == 0 {\n\t\treturn nil\n\t}\n\n\tpathsToWatch := []string{}\n\tfor path := range d.notifyList {\n\t\tpathsToWatch = append(pathsToWatch, path)\n\t}\n\n\tpathsToWatch, err := greatestExistingAncestors(pathsToWatch)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif d.isWatcherRecursive {\n\t\tpathsToWatch = pathutil.EncompassingPaths(pathsToWatch)\n\t}\n\n\tfor _, name := range pathsToWatch {\n\t\tfi, err := os.Stat(name)\n\t\tif err != nil && !os.IsNotExist(err) {\n\t\t\treturn fmt.Errorf(\"notify.Add(%q): %w\", name, err)\n\t\t}\n\n\t\t// if it's a file that doesn't exist,\n\t\t// we should have caught that above, let's just skip it.\n\t\tif os.IsNotExist(err) {\n\t\t\tcontinue\n\t\t}\n\n\t\tif fi.IsDir() {\n\t\t\terr = d.watchRecursively(name)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"notify.Add(%q): %w\", name, err)\n\t\t\t}\n\t\t} else {\n\t\t\terr = d.add(filepath.Dir(name))\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"notify.Add(%q): %w\", filepath.Dir(name), err)\n\t\t\t}\n\t\t}\n\t}\n\n\tgo d.loop()\n\n\treturn nil\n}\n\nfunc (d *naiveNotify) watchRecursively(dir string) error {\n\tif d.isWatcherRecursive {\n\t\terr := d.add(dir)\n\t\tif err == nil || os.IsNotExist(err) {\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"watcher.Add(%q): %w\", dir, err)\n\t}\n\n\treturn filepath.WalkDir(dir, func(path string, info fs.DirEntry, err error) error {\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif !info.IsDir() {\n\t\t\treturn nil\n\t\t}\n\n\t\tif d.shouldSkipDir(path) {\n\t\t\tlogrus.Debugf(\"Ignoring directory and its contents (recursively): 
%s\", path)\n\t\t\treturn filepath.SkipDir\n\t\t}\n\n\t\terr = d.add(path)\n\t\tif err != nil {\n\t\t\tif os.IsNotExist(err) {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\treturn fmt.Errorf(\"watcher.Add(%q): %w\", path, err)\n\t\t}\n\t\treturn nil\n\t})\n}\n\nfunc (d *naiveNotify) Close() error {\n\tnumberOfWatches.Add(-d.numWatches)\n\td.numWatches = 0\n\treturn d.watcher.Close()\n}\n\nfunc (d *naiveNotify) Events() chan FileEvent {\n\treturn d.wrappedEvents\n}\n\nfunc (d *naiveNotify) Errors() chan error {\n\treturn d.errors\n}\n\nfunc (d *naiveNotify) loop() { //nolint:gocyclo\n\tdefer close(d.wrappedEvents)\n\tfor e := range d.events {\n\t\t// The Windows fsnotify event stream sometimes gets events with empty names\n\t\t// that are also sent to the error stream. Hmmmm...\n\t\tif e.Name == \"\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif e.Op&fsnotify.Create != fsnotify.Create {\n\t\t\tif d.shouldNotify(e.Name) {\n\t\t\t\td.wrappedEvents <- FileEvent(e.Name)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tif d.isWatcherRecursive {\n\t\t\tif d.shouldNotify(e.Name) {\n\t\t\t\td.wrappedEvents <- FileEvent(e.Name)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// If the watcher is not recursive, we have to walk the tree\n\t\t// and add watches manually. 
We fire the event while we're walking the tree.\n\t\t// because it's a bit more elegant that way.\n\t\t//\n\t\t// TODO(dbentley): if there's a delete should we call d.watcher.Remove to prevent leaking?\n\t\terr := filepath.WalkDir(e.Name, func(path string, info fs.DirEntry, err error) error {\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tif d.shouldNotify(path) {\n\t\t\t\td.wrappedEvents <- FileEvent(path)\n\t\t\t}\n\n\t\t\t// TODO(dmiller): symlinks 😭\n\n\t\t\tshouldWatch := false\n\t\t\tif info.IsDir() {\n\t\t\t\t// watch directories unless we can skip them entirely\n\t\t\t\tif d.shouldSkipDir(path) {\n\t\t\t\t\treturn filepath.SkipDir\n\t\t\t\t}\n\n\t\t\t\tshouldWatch = true\n\t\t\t} else {\n\t\t\t\t// watch files that are explicitly named, but don't watch others\n\t\t\t\t_, ok := d.notifyList[path]\n\t\t\t\tif ok {\n\t\t\t\t\tshouldWatch = true\n\t\t\t\t}\n\t\t\t}\n\t\t\tif shouldWatch {\n\t\t\t\terr := d.add(path)\n\t\t\t\tif err != nil && !os.IsNotExist(err) {\n\t\t\t\t\tlogrus.Infof(\"Error watching path %s: %s\", e.Name, err)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil\n\t\t})\n\t\tif err != nil && !os.IsNotExist(err) {\n\t\t\tlogrus.Infof(\"Error walking directory %s: %s\", e.Name, err)\n\t\t}\n\t}\n}\n\nfunc (d *naiveNotify) shouldNotify(path string) bool {\n\tif _, ok := d.notifyList[path]; ok {\n\t\t// We generally don't care when directories change at the root of an ADD\n\t\tstat, err := os.Lstat(path)\n\t\tisDir := err == nil && stat.IsDir()\n\t\treturn !isDir\n\t}\n\n\tfor root := range d.notifyList {\n\t\tif pathutil.IsChild(root, path) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc (d *naiveNotify) shouldSkipDir(path string) bool {\n\t// If path is directly in the notifyList, we should always watch it.\n\tif d.notifyList[path] {\n\t\treturn false\n\t}\n\n\t// Suppose we're watching\n\t// /src/.tiltignore\n\t// but the .tiltignore file doesn't exist.\n\t//\n\t// Our watcher will create an inotify watch on /src/.\n\t//\n\t// But 
then we want to make sure we don't recurse from /src/ down to /src/node_modules.\n\t//\n\t// To handle this case, we only want to traverse dirs that are:\n\t// - A child of a directory that's in our notify list, or\n\t// - A parent of a directory that's in our notify list\n\t//   (i.e., to cover the \"path doesn't exist\" case).\n\tfor root := range d.notifyList {\n\t\tif pathutil.IsChild(root, path) || pathutil.IsChild(path, root) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc (d *naiveNotify) add(path string) error {\n\terr := d.watcher.Add(path)\n\tif err != nil {\n\t\treturn err\n\t}\n\td.numWatches++\n\tnumberOfWatches.Add(1)\n\treturn nil\n}\n\nfunc newWatcher(paths []string) (Notify, error) {\n\tfsw, err := fsnotify.NewWatcher()\n\tif err != nil {\n\t\tif strings.Contains(err.Error(), \"too many open files\") && runtime.GOOS == \"linux\" {\n\t\t\treturn nil, fmt.Errorf(\"hit OS limits creating a watcher.\\n\" +\n\t\t\t\t\"Run 'sysctl fs.inotify.max_user_instances' to check your inotify limits.\\n\" +\n\t\t\t\t\"To raise them, run 'sudo sysctl fs.inotify.max_user_instances=1024'\")\n\t\t}\n\t\treturn nil, fmt.Errorf(\"creating file watcher: %w\", err)\n\t}\n\tMaybeIncreaseBufferSize(fsw)\n\n\terr = fsw.SetRecursive()\n\tisWatcherRecursive := err == nil\n\n\twrappedEvents := make(chan FileEvent)\n\tnotifyList := make(map[string]bool, len(paths))\n\tif isWatcherRecursive {\n\t\tpaths = pathutil.EncompassingPaths(paths)\n\t}\n\tfor _, path := range paths {\n\t\tpath, err := filepath.Abs(path)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"newWatcher: %w\", err)\n\t\t}\n\t\tnotifyList[path] = true\n\t}\n\n\twmw := &naiveNotify{\n\t\tnotifyList:         notifyList,\n\t\twatcher:            fsw,\n\t\tevents:             fsw.Events,\n\t\twrappedEvents:      wrappedEvents,\n\t\terrors:             fsw.Errors,\n\t\tisWatcherRecursive: isWatcherRecursive,\n\t}\n\n\treturn wmw, nil\n}\n\nvar _ Notify = &naiveNotify{}\n\nfunc 
greatestExistingAncestors(paths []string) ([]string, error) {\n\tresult := []string{}\n\tfor _, p := range paths {\n\t\tnewP, err := greatestExistingAncestor(p)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"finding ancestor of %s: %w\", p, err)\n\t\t}\n\t\tresult = append(result, newP)\n\t}\n\treturn result, nil\n}\n"
  },
  {
    "path": "pkg/watch/watcher_naive_test.go",
    "content": "/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestDontWatchEachFile(t *testing.T) {\n\tif runtime.GOOS != \"linux\" {\n\t\tt.Skip(\"This test uses linux-specific inotify checks\")\n\t}\n\n\t// fsnotify is not recursive, so we need to watch each directory\n\t// you can watch individual files with fsnotify, but that is more prone to exhaust resources\n\t// this test uses a Linux way to get the number of watches to make sure we're watching\n\t// per-directory, not per-file\n\tf := newNotifyFixture(t)\n\n\twatched := f.TempDir(\"watched\")\n\n\t// there are a few different cases we want to test for because the code paths are slightly\n\t// different:\n\t// 1) initial: data there before we ever call watch\n\t// 2) inplace: data we create while the watch is happening\n\t// 3) staged: data we create in another directory and then atomically move into place\n\n\t// initial\n\tf.WriteFile(f.JoinPath(watched, \"initial.txt\"), \"initial data\")\n\n\tinitialDir := f.JoinPath(watched, \"initial_dir\")\n\tif err := os.Mkdir(initialDir, 0o777); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tfor i := range 100 {\n\t\tf.WriteFile(f.JoinPath(initialDir, fmt.Sprintf(\"%d\", i)), \"initial 
data\")\n\t}\n\n\tf.watch(watched)\n\tf.fsync()\n\tif len(f.events) != 0 {\n\t\tt.Fatalf(\"expected 0 initial events; got %d events: %v\", len(f.events), f.events)\n\t}\n\tf.events = nil\n\n\t// inplace\n\tinplace := f.JoinPath(watched, \"inplace\")\n\tif err := os.Mkdir(inplace, 0o777); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tf.WriteFile(f.JoinPath(inplace, \"inplace.txt\"), \"inplace data\")\n\n\tinplaceDir := f.JoinPath(inplace, \"inplace_dir\")\n\tif err := os.Mkdir(inplaceDir, 0o777); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tfor i := range 100 {\n\t\tf.WriteFile(f.JoinPath(inplaceDir, fmt.Sprintf(\"%d\", i)), \"inplace data\")\n\t}\n\n\tf.fsync()\n\tif len(f.events) < 100 {\n\t\tt.Fatalf(\"expected >100 inplace events; got %d events: %v\", len(f.events), f.events)\n\t}\n\tf.events = nil\n\n\t// staged\n\tstaged := f.TempDir(\"staged\")\n\tf.WriteFile(f.JoinPath(staged, \"staged.txt\"), \"staged data\")\n\n\tstagedDir := f.JoinPath(staged, \"staged_dir\")\n\tif err := os.Mkdir(stagedDir, 0o777); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tfor i := range 100 {\n\t\tf.WriteFile(f.JoinPath(stagedDir, fmt.Sprintf(\"%d\", i)), \"staged data\")\n\t}\n\n\tif err := os.Rename(staged, f.JoinPath(watched, \"staged\")); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tf.fsync()\n\tif len(f.events) < 100 {\n\t\tt.Fatalf(\"expected >100 staged events; got %d events: %v\", len(f.events), f.events)\n\t}\n\tf.events = nil\n\n\tn, err := inotifyNodes()\n\trequire.NoError(t, err)\n\tif n > 10 {\n\t\tt.Fatalf(\"watching more than 10 files: %d\", n)\n\t}\n}\n\nfunc inotifyNodes() (int, error) {\n\tpid := os.Getpid()\n\n\toutput, err := exec.Command(\"/bin/sh\", \"-c\", fmt.Sprintf(\n\t\t\"find /proc/%d/fd -lname anon_inode:inotify -printf '%%hinfo/%%f\\n' | xargs cat | grep -c '^inotify'\", pid)).Output()\n\tif err != nil {\n\t\treturn 0, fmt.Errorf(\"error running command to determine number of watched files: %w\\n %s\", err, output)\n\t}\n\n\tn, err := 
strconv.Atoi(strings.TrimSpace(string(output)))\n\tif err != nil {\n\t\treturn 0, fmt.Errorf(\"couldn't parse number of watched files: %w\", err)\n\t}\n\treturn n, nil\n}\n\nfunc TestDontRecurseWhenWatchingParentsOfNonExistentFiles(t *testing.T) {\n\tif runtime.GOOS != \"linux\" {\n\t\tt.Skip(\"This test uses linux-specific inotify checks\")\n\t}\n\n\tf := newNotifyFixture(t)\n\n\twatched := f.TempDir(\"watched\")\n\tf.watch(filepath.Join(watched, \".tiltignore\"))\n\n\texcludedDir := f.JoinPath(watched, \"excluded\")\n\tfor i := range 10 {\n\t\tf.WriteFile(f.JoinPath(excludedDir, fmt.Sprintf(\"%d\", i), \"data.txt\"), \"initial data\")\n\t}\n\tf.fsync()\n\n\tn, err := inotifyNodes()\n\trequire.NoError(t, err)\n\tif n > 5 {\n\t\tt.Fatalf(\"watching more than 5 files: %d\", n)\n\t}\n}\n"
  },
  {
    "path": "pkg/watch/watcher_nonwin.go",
    "content": "//go:build !windows\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport \"github.com/tilt-dev/fsnotify\"\n\nfunc MaybeIncreaseBufferSize(w *fsnotify.Watcher) {\n\t// Not needed on non-windows\n}\n"
  },
  {
    "path": "pkg/watch/watcher_windows.go",
    "content": "//go:build windows\n\n/*\n   Copyright 2020 Docker Compose CLI authors\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n*/\n\npackage watch\n\nimport (\n\t\"github.com/tilt-dev/fsnotify\"\n)\n\n// TODO(nick): I think the ideal API would be to automatically increase the\n// size of the buffer when we exceed capacity. But this gets messy,\n// because each time we get a short read error, we need to invalidate\n// everything we know about the currently changed files. So for now,\n// we just provide a way for callers to increase the buffer themselves.\n//\n// It might also pay to be clever about sizing the buffer\n// relative to the number of files in the directory we're watching.\nfunc MaybeIncreaseBufferSize(w *fsnotify.Watcher) {\n\tw.SetBufferSize(DesiredWindowsBufferSize())\n}\n"
  }
]