[
  {
    "path": ".gitattributes",
    "content": ".github/workflows/*.lock.yml linguist-generated=true merge=ours"
  },
  {
    "path": ".github/FUNDING.yml",
    "content": "# These are supported funding model platforms\n\ngithub: fredeil\npatreon: # Replace with a single Patreon username\nopen_collective: # Replace with a single Open Collective username\nko_fi: # Replace with a single Ko-fi username\ntidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel\ncommunity_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry\nliberapay: # Replace with a single Liberapay username\nissuehunt: # Replace with a single IssueHunt username\notechie: # Replace with a single Otechie username\ncustom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']\n"
  },
  {
    "path": ".github/copilot-instructions.md",
    "content": "# Copilot Instructions\n\n## Commands\n\n```bash\ndart pub get          # install dependencies\ndart test             # run all tests\ndart test test/email_validator_test.dart  # run the single test file\ndart analyze --fatal-infos  # lint/analyze\ndart format --output=none --set-exit-if-changed .  # check formatting\ndart format .         # auto-format\n```\n\n## Architecture\n\nThis is a minimal single-file Dart package (`lib/email_validator.dart`) that validates email addresses without using RegEx, implementing RFC-compliant parsing manually.\n\nThe `EmailValidator` class uses a **cursor-based parser** with a shared static `_index` field that advances through the email string. Parsing proceeds in two phases:\n1. **Local part** – either a quoted string (`_skipQuoted`) or dot-separated atoms (`_skipAtom`)\n2. **Domain part** – either a domain name (`_skipDomain`/`_skipSubDomain`) or an address literal (`_skipIPv4Literal`/`_skipIPv6Literal`)\n\n`SubdomainType` enum tracks whether the current subdomain is alphabetic, numeric, or alphanumeric — used to reject all-numeric TLDs (e.g. `user@127.0.0.1` without brackets).\n\nPublic API is a single static method:\n```dart\nEmailValidator.validate(String email, [bool allowTopLevelDomains = false, bool allowInternational = true])\n```\n\n## Key Conventions\n\n- All private helpers are `static` and mutate the shared `static int _index` — the class is stateless between calls (reset at the start of `validate`), but **not thread-safe**.\n- `allowInternational` controls whether non-ASCII characters (codeUnit ≥ 128) are accepted in local part and domain labels.\n- Test cases in `test/email_validator_test.dart` are the canonical source of truth for expected behavior — valid/invalid/international address lists are maintained there directly.\n- Releases are managed via the `Release` GitHub Actions workflow (manual `workflow_dispatch`), which bumps `pubspec.yaml`, updates `CHANGELOG.md`, commits, and tags. 
Publishing to pub.dev triggers automatically on version tags matching `v*.*.*`.\n- When updating the version, update both `pubspec.yaml` and `CHANGELOG.md`.\n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: CI\n\non:\n  push:\n    branches: [master]\n  pull_request:\n    branches: [master]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n    steps:\n      - uses: actions/checkout@v4\n      - uses: dart-lang/setup-dart@v1\n      - run: dart pub get\n      - run: dart format --output=none --set-exit-if-changed .\n      - run: dart analyze --fatal-infos\n      - run: dart test\n"
  },
  {
    "path": ".github/workflows/publish.yaml",
    "content": "name: Publish to pub.dev\n\non:\n  push:\n    tags:\n      - 'v[0-9]+.[0-9]+.[0-9]+*'\n\njobs:\n  publish:\n    permissions:\n      id-token: write\n    uses: dart-lang/setup-dart/.github/workflows/publish.yml@v1\n"
  },
  {
    "path": ".github/workflows/release.yml",
    "content": "name: Release\n\non:\n  workflow_dispatch:\n    inputs:\n      version:\n        description: \"Version: package version and git tag to create (e.g. 3.1.0)\"\n        required: true\n\n      changelog:\n        description: \"CHANGELOG.md notes\"\n        required: true\n        default: \"Bug fixes and performance improvements\"\n\n      pre_release:\n        description: \"Whether the release is a prerelease\"\n        required: true\n        default: \"false\"\n        type: choice\n        options:\n          - \"false\"\n          - \"true\"\n\npermissions:\n  contents: write\n\njobs:\n  release:\n    runs-on: ubuntu-latest\n\n    steps:\n      - uses: actions/checkout@v4\n\n      - uses: dart-lang/setup-dart@v1\n        with:\n          sdk: stable\n\n      - name: Install dependencies\n        run: dart pub get\n\n      - name: Verify formatting\n        run: dart format --output=none --set-exit-if-changed .\n\n      - name: Analyze\n        run: dart analyze --fatal-infos\n\n      - name: Run tests\n        run: dart test\n\n      - name: Determine version tag\n        id: version\n        run: |\n          TAG=${{ inputs.version }}\n          if [ \"${{ inputs.pre_release }}\" = \"true\" ]; then\n            TAG=\"${TAG}-dev\"\n          fi\n          echo \"tag=${TAG}\" >> \"$GITHUB_OUTPUT\"\n\n      - name: Update pubspec.yaml version\n        run: |\n          sed -i \"s/^version: .*/version: ${{ steps.version.outputs.tag }}/\" pubspec.yaml\n\n      - name: Update CHANGELOG.md\n        run: |\n          printf '%s\\n%s\\n\\n%s' \"## ${{ steps.version.outputs.tag }}\" \"${{ inputs.changelog }}\" \"$(cat CHANGELOG.md)\" > CHANGELOG.md\n\n      - name: Commit and push version bump\n        run: |\n          git config user.name \"github-actions[bot]\"\n          git config user.email \"41898282+github-actions[bot]@users.noreply.github.com\"\n          git add pubspec.yaml CHANGELOG.md\n          git commit -m \"chore: bump version to ${{ 
steps.version.outputs.tag }}\"\n          git push origin HEAD:${{ github.ref_name }}\n\n      - name: Create GitHub release\n        uses: softprops/action-gh-release@v2\n        with:\n          tag_name: v${{ steps.version.outputs.tag }}\n          body: ${{ inputs.changelog }}\n          prerelease: ${{ inputs.pre_release == 'true' }}\n          target_commitish: ${{ github.ref_name }}\n"
  },
  {
    "path": ".github/workflows/repo-assist.lock.yml",
    "content": "#\n#    ___                   _   _      \n#   / _ \\                 | | (_)     \n#  | |_| | __ _  ___ _ __ | |_ _  ___ \n#  |  _  |/ _` |/ _ \\ '_ \\| __| |/ __|\n#  | | | | (_| |  __/ | | | |_| | (__ \n#  \\_| |_/\\__, |\\___|_| |_|\\__|_|\\___|\n#          __/ |\n#  _    _ |___/ \n# | |  | |                / _| |\n# | |  | | ___ _ __ _  __| |_| | _____      ____\n# | |/\\| |/ _ \\ '__| |/ /|  _| |/ _ \\ \\ /\\ / / ___|\n# \\  /\\  / (_) | | | | ( | | | | (_) \\ V  V /\\__ \\\n#  \\/  \\/ \\___/|_| |_|\\_\\|_| |_|\\___/ \\_/\\_/ |___/\n#\n# This file was automatically generated by gh-aw (v0.50.2). DO NOT EDIT.\n#\n# To update this file, edit githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd and run:\n#   gh aw compile\n# Not all edits will cause changes to this file.\n#\n# For more information: https://github.github.com/gh-aw/introduction/overview/\n#\n# A friendly repository assistant that runs daily to support contributors and maintainers.\n# Can also be triggered on-demand via '/repo-assist <instructions>' to perform specific tasks.\n# - Comments helpfully on open issues to unblock contributors and onboard newcomers\n# - Identifies issues that can be fixed and creates draft pull requests with fixes\n# - Studies the codebase and proposes improvements via PRs\n# - Updates its own PRs when CI fails or merge conflicts arise\n# - Nudges stale PRs waiting for author response\n# - Manages issue and PR labels for organization\n# - Prepares releases by updating changelogs and proposing version bumps\n# - Welcomes new contributors with friendly onboarding\n# - Maintains a persistent memory of work done and what remains\n# Always polite, constructive, and mindful of the project's goals.\n#\n# Source: githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\n#\n# gh-aw-metadata: 
{\"schema_version\":\"v1\",\"frontmatter_hash\":\"c5b7fa09add2feca4fc9f81e36b59e7bef152488b3f2d34bda0bec18b300e050\",\"compiler_version\":\"v0.50.2\"}\n\nname: \"Repo Assist\"\n\"on\":\n  discussion:\n    types:\n    - created\n    - edited\n  discussion_comment:\n    types:\n    - created\n    - edited\n  issue_comment:\n    types:\n    - created\n    - edited\n  issues:\n    types:\n    - opened\n    - edited\n    - reopened\n  pull_request:\n    types:\n    - opened\n    - edited\n    - reopened\n  pull_request_review_comment:\n    types:\n    - created\n    - edited\n  schedule:\n  - cron: \"20 7 * * *\"\n  workflow_dispatch: null\n\npermissions: {}\n\nconcurrency:\n  group: \"gh-aw-${{ github.workflow }}-${{ github.event.issue.number || github.event.pull_request.number }}\"\n\nrun-name: \"Repo Assist\"\n\njobs:\n  activation:\n    needs: pre_activation\n    if: >\n      (needs.pre_activation.outputs.activated == 'true') && (((github.event_name == 'issues' || github.event_name == 'issue_comment' ||\n      github.event_name == 'pull_request' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' ||\n      github.event_name == 'discussion_comment') && ((github.event_name == 'issues') && ((startsWith(github.event.issue.body, '/repo-assist ')) ||\n      (github.event.issue.body == '/repo-assist')) || (github.event_name == 'issue_comment') && (((startsWith(github.event.comment.body, '/repo-assist ')) ||\n      (github.event.comment.body == '/repo-assist')) && (github.event.issue.pull_request == null)) || (github.event_name == 'issue_comment') &&\n      (((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) &&\n      (github.event.issue.pull_request != null)) || (github.event_name == 'pull_request_review_comment') &&\n      ((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) ||\n      (github.event_name == 'pull_request') 
&& ((startsWith(github.event.pull_request.body, '/repo-assist ')) ||\n      (github.event.pull_request.body == '/repo-assist')) || (github.event_name == 'discussion') && ((startsWith(github.event.discussion.body, '/repo-assist ')) ||\n      (github.event.discussion.body == '/repo-assist')) || (github.event_name == 'discussion_comment') && ((startsWith(github.event.comment.body, '/repo-assist ')) ||\n      (github.event.comment.body == '/repo-assist')))) || (!(github.event_name == 'issues' || github.event_name == 'issue_comment' ||\n      github.event_name == 'pull_request' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' ||\n      github.event_name == 'discussion_comment')))\n    runs-on: ubuntu-slim\n    permissions:\n      contents: read\n      discussions: write\n      issues: write\n      pull-requests: write\n    outputs:\n      body: ${{ steps.sanitized.outputs.body }}\n      comment_id: \"\"\n      comment_repo: \"\"\n      slash_command: ${{ needs.pre_activation.outputs.matched_command }}\n      text: ${{ steps.sanitized.outputs.text }}\n      title: ${{ steps.sanitized.outputs.title }}\n    steps:\n      - name: Setup Scripts\n        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2\n        with:\n          destination: /opt/gh-aw/actions\n      - name: Validate context variables\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/validate_context_variables.cjs');\n            await main();\n      - name: Checkout .github and .agents folders\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          sparse-checkout: |\n            .github\n            .agents\n     
     fetch-depth: 1\n          persist-credentials: false\n      - name: Check workflow file timestamps\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_WORKFLOW_FILE: \"repo-assist.lock.yml\"\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/check_workflow_timestamp_api.cjs');\n            await main();\n      - name: Compute current body text\n        id: sanitized\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/compute_text.cjs');\n            await main();\n      - name: Create prompt with built-in context\n        env:\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}\n          GH_AW_GITHUB_ACTOR: ${{ github.actor }}\n          GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }}\n          GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }}\n          GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }}\n          GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}\n          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}\n          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}\n          GH_AW_GITHUB_SERVER_URL: ${{ github.server_url }}\n          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}\n          GH_AW_IS_PR_COMMENT: ${{ github.event.issue.pull_request && 'true' || '' }}\n          GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: ${{ steps.sanitized.outputs.text }}\n        run: |\n          bash 
/opt/gh-aw/actions/create_prompt_first.sh\n          {\n          cat << 'GH_AW_PROMPT_EOF'\n          <system>\n          GH_AW_PROMPT_EOF\n          cat \"/opt/gh-aw/prompts/xpia.md\"\n          cat \"/opt/gh-aw/prompts/temp_folder_prompt.md\"\n          cat \"/opt/gh-aw/prompts/markdown.md\"\n          cat \"/opt/gh-aw/prompts/repo_memory_prompt.md\"\n          cat \"/opt/gh-aw/prompts/safe_outputs_prompt.md\"\n          cat << 'GH_AW_PROMPT_EOF'\n          <safe-output-tools>\n          Tools: add_comment, create_issue, update_issue, create_pull_request, add_labels, remove_labels, push_to_pull_request_branch, missing_tool, missing_data\n          GH_AW_PROMPT_EOF\n          cat \"/opt/gh-aw/prompts/safe_outputs_create_pull_request.md\"\n          cat \"/opt/gh-aw/prompts/safe_outputs_push_to_pr_branch.md\"\n          cat << 'GH_AW_PROMPT_EOF'\n          </safe-output-tools>\n          <github-context>\n          The following GitHub context information is available for this workflow:\n          {{#if __GH_AW_GITHUB_ACTOR__ }}\n          - **actor**: __GH_AW_GITHUB_ACTOR__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_REPOSITORY__ }}\n          - **repository**: __GH_AW_GITHUB_REPOSITORY__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_WORKSPACE__ }}\n          - **workspace**: __GH_AW_GITHUB_WORKSPACE__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_EVENT_ISSUE_NUMBER__ }}\n          - **issue-number**: #__GH_AW_GITHUB_EVENT_ISSUE_NUMBER__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__ }}\n          - **discussion-number**: #__GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__ }}\n          - **pull-request-number**: #__GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_EVENT_COMMENT_ID__ }}\n          - **comment-id**: __GH_AW_GITHUB_EVENT_COMMENT_ID__\n          {{/if}}\n          {{#if __GH_AW_GITHUB_RUN_ID__ 
}}\n          - **workflow-run-id**: __GH_AW_GITHUB_RUN_ID__\n          {{/if}}\n          </github-context>\n          \n          GH_AW_PROMPT_EOF\n          if [ \"$GITHUB_EVENT_NAME\" = \"issue_comment\" ] && [ -n \"$GH_AW_IS_PR_COMMENT\" ] || [ \"$GITHUB_EVENT_NAME\" = \"pull_request_review_comment\" ] || [ \"$GITHUB_EVENT_NAME\" = \"pull_request_review\" ]; then\n            cat \"/opt/gh-aw/prompts/pr_context_prompt.md\"\n          fi\n          cat << 'GH_AW_PROMPT_EOF'\n          </system>\n          GH_AW_PROMPT_EOF\n          cat << 'GH_AW_PROMPT_EOF'\n          {{#runtime-import .github/workflows/repo-assist.md}}\n          GH_AW_PROMPT_EOF\n          } > \"$GH_AW_PROMPT\"\n      - name: Interpolate variables and render templates\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}\n          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}\n          GH_AW_GITHUB_SERVER_URL: ${{ github.server_url }}\n          GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: ${{ steps.sanitized.outputs.text }}\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/interpolate_prompt.cjs');\n            await main();\n      - name: Substitute placeholders\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n          GH_AW_GITHUB_ACTOR: ${{ github.actor }}\n          GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }}\n          GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }}\n          GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }}\n          GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ 
github.event.pull_request.number }}\n          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}\n          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}\n          GH_AW_GITHUB_SERVER_URL: ${{ github.server_url }}\n          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}\n          GH_AW_IS_PR_COMMENT: ${{ github.event.issue.pull_request && 'true' || '' }}\n          GH_AW_MEMORY_BRANCH_NAME: 'memory/repo-assist'\n          GH_AW_MEMORY_CONSTRAINTS: \"\\n\\n**Constraints:**\\n- **Max File Size**: 10240 bytes (0.01 MB) per file\\n- **Max File Count**: 100 files per commit\\n- **Max Patch Size**: 10240 bytes (10 KB) total per push (max: 100 KB)\\n\"\n          GH_AW_MEMORY_DESCRIPTION: ''\n          GH_AW_MEMORY_DIR: '/tmp/gh-aw/repo-memory/default/'\n          GH_AW_MEMORY_TARGET_REPO: ' of the current repository'\n          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}\n          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}\n          GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: ${{ steps.sanitized.outputs.text }}\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            \n            const substitutePlaceholders = require('/opt/gh-aw/actions/substitute_placeholders.cjs');\n            \n            // Call the substitution function\n            return await substitutePlaceholders({\n              file: process.env.GH_AW_PROMPT,\n              substitutions: {\n                GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR,\n                GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID,\n                GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER,\n                GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER,\n                
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,\n                GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,\n                GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,\n                GH_AW_GITHUB_SERVER_URL: process.env.GH_AW_GITHUB_SERVER_URL,\n                GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,\n                GH_AW_IS_PR_COMMENT: process.env.GH_AW_IS_PR_COMMENT,\n                GH_AW_MEMORY_BRANCH_NAME: process.env.GH_AW_MEMORY_BRANCH_NAME,\n                GH_AW_MEMORY_CONSTRAINTS: process.env.GH_AW_MEMORY_CONSTRAINTS,\n                GH_AW_MEMORY_DESCRIPTION: process.env.GH_AW_MEMORY_DESCRIPTION,\n                GH_AW_MEMORY_DIR: process.env.GH_AW_MEMORY_DIR,\n                GH_AW_MEMORY_TARGET_REPO: process.env.GH_AW_MEMORY_TARGET_REPO,\n                GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,\n                GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND,\n                GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: process.env.GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT\n              }\n            });\n      - name: Validate prompt placeholders\n        env:\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n        run: bash /opt/gh-aw/actions/validate_prompt_placeholders.sh\n      - name: Print prompt\n        env:\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n        run: bash /opt/gh-aw/actions/print_prompt_summary.sh\n      - name: Upload prompt artifact\n        if: success()\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: prompt\n          path: /tmp/gh-aw/aw-prompts/prompt.txt\n          retention-days: 1\n\n  agent:\n    needs: activation\n    runs-on: ubuntu-latest\n    permissions: read-all\n    env:\n      DEFAULT_BRANCH: ${{ 
github.event.repository.default_branch }}\n      GH_AW_ASSETS_ALLOWED_EXTS: \"\"\n      GH_AW_ASSETS_BRANCH: \"\"\n      GH_AW_ASSETS_MAX_SIZE_KB: 0\n      GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs\n      GH_AW_SAFE_OUTPUTS: /opt/gh-aw/safeoutputs/outputs.jsonl\n      GH_AW_SAFE_OUTPUTS_CONFIG_PATH: /opt/gh-aw/safeoutputs/config.json\n      GH_AW_SAFE_OUTPUTS_TOOLS_PATH: /opt/gh-aw/safeoutputs/tools.json\n      GH_AW_WORKFLOW_ID_SANITIZED: repoassist\n    outputs:\n      checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}\n      detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}\n      detection_success: ${{ steps.detection_conclusion.outputs.success }}\n      has_patch: ${{ steps.collect_output.outputs.has_patch }}\n      model: ${{ steps.generate_aw_info.outputs.model }}\n      output: ${{ steps.collect_output.outputs.output }}\n      output_types: ${{ steps.collect_output.outputs.output_types }}\n      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}\n    steps:\n      - name: Setup Scripts\n        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2\n        with:\n          destination: /opt/gh-aw/actions\n      - name: Checkout repository\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          persist-credentials: false\n      - name: Create gh-aw temp directory\n        run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh\n      # Repo memory git-based storage configuration from frontmatter processed below\n      - name: Clone repo-memory branch (default)\n        env:\n          GH_TOKEN: ${{ github.token }}\n          GITHUB_SERVER_URL: ${{ github.server_url }}\n          BRANCH_NAME: memory/repo-assist\n          TARGET_REPO: ${{ github.repository }}\n          MEMORY_DIR: /tmp/gh-aw/repo-memory/default\n          CREATE_ORPHAN: true\n        run: bash 
/opt/gh-aw/actions/clone_repo_memory_branch.sh\n      - name: Configure Git credentials\n        env:\n          REPO_NAME: ${{ github.repository }}\n          SERVER_URL: ${{ github.server_url }}\n        run: |\n          git config --global user.email \"github-actions[bot]@users.noreply.github.com\"\n          git config --global user.name \"github-actions[bot]\"\n          git config --global am.keepcr true\n          # Re-authenticate git with GitHub token\n          SERVER_URL_STRIPPED=\"${SERVER_URL#https://}\"\n          git remote set-url origin \"https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git\"\n          echo \"Git configured with standard GitHub Actions identity\"\n      - name: Checkout PR branch\n        id: checkout-pr\n        if: |\n          github.event.pull_request\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n        with:\n          github-token: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');\n            await main();\n      - name: Generate agentic run info\n        id: generate_aw_info\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n            const fs = require('fs');\n            \n            const awInfo = {\n              engine_id: \"copilot\",\n              engine_name: \"GitHub Copilot CLI\",\n              model: process.env.GH_AW_MODEL_AGENT_COPILOT || \"\",\n              version: \"\",\n              agent_version: \"0.0.415\",\n              
cli_version: \"v0.50.2\",\n              workflow_name: \"Repo Assist\",\n              experimental: false,\n              supports_tools_allowlist: true,\n              run_id: context.runId,\n              run_number: context.runNumber,\n              run_attempt: process.env.GITHUB_RUN_ATTEMPT,\n              repository: context.repo.owner + '/' + context.repo.repo,\n              ref: context.ref,\n              sha: context.sha,\n              actor: context.actor,\n              event_name: context.eventName,\n              staged: false,\n              allowed_domains: [\"defaults\",\"dotnet\",\"node\",\"python\",\"rust\",\"java\"],\n              firewall_enabled: true,\n              awf_version: \"v0.23.0\",\n              awmg_version: \"v0.1.5\",\n              steps: {\n                firewall: \"squid\"\n              },\n              created_at: new Date().toISOString()\n            };\n            \n            // Write to /tmp/gh-aw directory to avoid inclusion in PR\n            const tmpPath = '/tmp/gh-aw/aw_info.json';\n            fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));\n            console.log('Generated aw_info.json at:', tmpPath);\n            console.log(JSON.stringify(awInfo, null, 2));\n            \n            // Set model as output for reuse in other steps/jobs\n            core.setOutput('model', awInfo.model);\n      - name: Validate COPILOT_GITHUB_TOKEN secret\n        id: validate-secret\n        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default\n        env:\n          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}\n      - name: Install GitHub Copilot CLI\n        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.415\n      - name: Install awf binary\n        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0\n      - name: Determine automatic lockdown mode for GitHub MCP 
Server\n        id: determine-automatic-lockdown\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}\n          GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}\n        with:\n          script: |\n            const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');\n            await determineAutomaticLockdown(github, context, core);\n      - name: Download container images\n        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.5 ghcr.io/github/github-mcp-server:v0.31.0 node:lts-alpine\n      - name: Write Safe Outputs Config\n        run: |\n          mkdir -p /opt/gh-aw/safeoutputs\n          mkdir -p /tmp/gh-aw/safeoutputs\n          mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs\n          cat > /opt/gh-aw/safeoutputs/config.json << 'GH_AW_SAFE_OUTPUTS_CONFIG_EOF'\n          {\"add_comment\":{\"max\":10,\"target\":\"*\"},\"add_labels\":{\"allowed\":[\"bug\",\"enhancement\",\"help wanted\",\"good first issue\",\"spam\",\"off topic\",\"documentation\",\"question\",\"duplicate\",\"wontfix\",\"needs triage\",\"needs investigation\",\"breaking change\",\"performance\",\"security\",\"refactor\"],\"max\":30,\"target\":\"*\"},\"create_issue\":{\"max\":4},\"create_pull_request\":{\"max\":4},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1},\"push_to_pull_request_branch\":{\"max\":4,\"target\":\"*\"},\"remove_labels\":{\"allowed\":[\"bug\",\"enhancement\",\"help wanted\",\"good first issue\",\"spam\",\"off topic\",\"documentation\",\"question\",\"duplicate\",\"wontfix\",\"needs triage\",\"needs investigation\",\"breaking change\",\"performance\",\"security\",\"refactor\"],\"max\":5},\"update_issue\":{\"max\":1}}\n       
   GH_AW_SAFE_OUTPUTS_CONFIG_EOF\n          cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'\n          [\n            {\n              \"description\": \"Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 4 issue(s) can be created. Title will be prefixed with \\\"[Repo Assist] \\\". Labels [automation repo-assist] will be automatically added.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"body\": {\n                    \"description\": \"Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.\",\n                    \"type\": \"string\"\n                  },\n                  \"labels\": {\n                    \"description\": \"Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.\",\n                    \"items\": {\n                      \"type\": \"string\"\n                    },\n                    \"type\": \"array\"\n                  },\n                  \"parent\": {\n                    \"description\": \"Parent issue number for creating sub-issues. This is the numeric ID from the GitHub URL (e.g., 42 in github.com/owner/repo/issues/42). 
Can also be a temporary_id (e.g., 'aw_abc123', 'aw_Test123') from a previously created issue in the same workflow run.\",\n                    \"type\": [\n                      \"number\",\n                      \"string\"\n                    ]\n                  },\n                  \"temporary_id\": {\n                    \"description\": \"Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.\",\n                    \"pattern\": \"^aw_[A-Za-z0-9]{3,8}$\",\n                    \"type\": \"string\"\n                  },\n                  \"title\": {\n                    \"description\": \"Concise issue title summarizing the bug, feature, or task. The title appears as the main heading, so keep it brief and descriptive.\",\n                    \"type\": \"string\"\n                  }\n                },\n                \"required\": [\n                  \"title\",\n                  \"body\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"create_issue\"\n            },\n            {\n              \"description\": \"Add a comment to an existing GitHub issue, pull request, or discussion. Use this to provide feedback, answer questions, or add information to an existing conversation. For creating new items, use create_issue, create_discussion, or create_pull_request instead. IMPORTANT: Comments are subject to validation constraints enforced by the MCP server - maximum 65536 characters for the complete comment (including footer which is added automatically), 10 mentions (@username), and 50 links. Exceeding these limits will result in an immediate error with specific guidance. NOTE: By default, this tool requires discussions:write permission. 
If your GitHub App lacks Discussions permission, set 'discussions: false' in the workflow's safe-outputs.add-comment configuration to exclude this permission. CONSTRAINTS: Maximum 10 comment(s) can be added. Target: *.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"body\": {\n                    \"description\": \"The comment text in Markdown format. This is the 'body' field - do not use 'comment_body' or other variations. Provide helpful, relevant information that adds value to the conversation. CONSTRAINTS: The complete comment (your body text + automatically added footer) must not exceed 65536 characters total. Maximum 10 mentions (@username), maximum 50 links (http/https URLs). A footer (~200-500 characters) is automatically appended with workflow attribution, so leave adequate space. If these limits are exceeded, the tool call will fail with a detailed error message indicating which constraint was violated.\",\n                    \"type\": \"string\"\n                  },\n                  \"item_number\": {\n                    \"description\": \"The issue, pull request, or discussion number to comment on. This is the numeric ID from the GitHub URL (e.g., 123 in github.com/owner/repo/issues/123). If omitted, the tool auto-targets the issue, PR, or discussion that triggered this workflow. Auto-targeting only works for issue, pull_request, discussion, and comment event triggers — it does NOT work for schedule, workflow_dispatch, push, or workflow_run triggers. 
For those trigger types, always provide item_number explicitly, or the comment will be silently discarded.\",\n                    \"type\": \"number\"\n                  }\n                },\n                \"required\": [\n                  \"body\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"add_comment\"\n            },\n            {\n              \"description\": \"Create a new GitHub pull request to propose code changes. Use this after making file edits to submit them for review and merging. The PR will be created from the current branch with your committed changes. For code review comments on an existing PR, use create_pull_request_review_comment instead. CONSTRAINTS: Maximum 4 pull request(s) can be created. Title will be prefixed with \\\"[Repo Assist] \\\". Labels [automation repo-assist] will be automatically added. PRs will be created as drafts.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"body\": {\n                    \"description\": \"Detailed PR description in Markdown. Include what changes were made, why, testing notes, and any breaking changes. Do NOT repeat the title as a heading.\",\n                    \"type\": \"string\"\n                  },\n                  \"branch\": {\n                    \"description\": \"Source branch name containing the changes. If omitted, uses the current working branch.\",\n                    \"type\": \"string\"\n                  },\n                  \"draft\": {\n                    \"description\": \"Whether to create the PR as a draft. Draft PRs cannot be merged until marked as ready for review. Use mark_pull_request_as_ready_for_review to convert a draft PR. 
Default: true.\",\n                    \"type\": \"boolean\"\n                  },\n                  \"labels\": {\n                    \"description\": \"Labels to categorize the PR (e.g., 'enhancement', 'bugfix'). Labels must exist in the repository.\",\n                    \"items\": {\n                      \"type\": \"string\"\n                    },\n                    \"type\": \"array\"\n                  },\n                  \"title\": {\n                    \"description\": \"Concise PR title describing the changes. Follow repository conventions (e.g., conventional commits). The title appears as the main heading.\",\n                    \"type\": \"string\"\n                  }\n                },\n                \"required\": [\n                  \"title\",\n                  \"body\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"create_pull_request\"\n            },\n            {\n              \"description\": \"Add labels to an existing GitHub issue or pull request for categorization and filtering. Labels must already exist in the repository. For creating new issues with labels, use create_issue with the labels property instead. CONSTRAINTS: Maximum 30 label(s) can be added. Only these labels are allowed: [bug enhancement help wanted good first issue spam off topic documentation question duplicate wontfix needs triage needs investigation breaking change performance security refactor]. Target: *.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"item_number\": {\n                    \"description\": \"Issue or PR number to add labels to. This is the numeric ID from the GitHub URL (e.g., 456 in github.com/owner/repo/issues/456). If omitted, adds labels to the issue or PR that triggered this workflow. Only works for issue or pull_request event triggers. 
For schedule, workflow_dispatch, or other triggers, item_number is required — omitting it will silently skip the label operation.\",\n                    \"type\": \"number\"\n                  },\n                  \"labels\": {\n                    \"description\": \"Label names to add (e.g., ['bug', 'priority-high']). Labels must exist in the repository.\",\n                    \"items\": {\n                      \"type\": \"string\"\n                    },\n                    \"type\": \"array\"\n                  }\n                },\n                \"type\": \"object\"\n              },\n              \"name\": \"add_labels\"\n            },\n            {\n              \"description\": \"Remove labels from an existing GitHub issue or pull request. Silently skips labels that don't exist on the item. Use this to clean up labels or manage label lifecycles (e.g., removing 'needs-review' after review is complete). CONSTRAINTS: Maximum 5 label(s) can be removed. Only these labels can be removed: [bug enhancement help wanted good first issue spam off topic documentation question duplicate wontfix needs triage needs investigation breaking change performance security refactor]. Target: *.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"item_number\": {\n                    \"description\": \"Issue or PR number to remove labels from. This is the numeric ID from the GitHub URL (e.g., 456 in github.com/owner/repo/issues/456). If omitted, removes labels from the item that triggered this workflow.\",\n                    \"type\": \"number\"\n                  },\n                  \"labels\": {\n                    \"description\": \"Label names to remove (e.g., ['smoke', 'needs-triage']). 
Non-existent labels are silently skipped.\",\n                    \"items\": {\n                      \"type\": \"string\"\n                    },\n                    \"type\": \"array\"\n                  }\n                },\n                \"required\": [\n                  \"labels\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"remove_labels\"\n            },\n            {\n              \"description\": \"Update an existing GitHub issue's title, body, labels, assignees, or milestone WITHOUT closing it. This tool is primarily for editing issue metadata and content. While it supports changing status between 'open' and 'closed', use close_issue instead when you want to close an issue with a closing comment. Body updates support replacing, appending to, prepending content, or updating a per-run \\\"island\\\" section. CONSTRAINTS: Maximum 1 issue(s) can be updated. Target: *.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"assignees\": {\n                    \"description\": \"Replace the issue assignees with this list of GitHub usernames (e.g., ['octocat', 'mona']).\",\n                    \"items\": {\n                      \"type\": \"string\"\n                    },\n                    \"type\": \"array\"\n                  },\n                  \"body\": {\n                    \"description\": \"Issue body content in Markdown. For 'replace', this becomes the entire body. For 'append'/'prepend', this content is added with a separator and an attribution footer. For 'replace-island', only the run-specific section is updated.\",\n                    \"type\": \"string\"\n                  },\n                  \"issue_number\": {\n                    \"description\": \"Issue number to update. This is the numeric ID from the GitHub URL (e.g., 789 in github.com/owner/repo/issues/789). 
Required when the workflow target is '*' (any issue).\",\n                    \"type\": [\n                      \"number\",\n                      \"string\"\n                    ]\n                  },\n                  \"labels\": {\n                    \"description\": \"Replace the issue labels with this list (e.g., ['bug', 'tracking:foo']). Labels must exist in the repository.\",\n                    \"items\": {\n                      \"type\": \"string\"\n                    },\n                    \"type\": \"array\"\n                  },\n                  \"milestone\": {\n                    \"description\": \"Milestone number to assign (e.g., 1). Use null to clear.\",\n                    \"type\": [\n                      \"number\",\n                      \"string\"\n                    ]\n                  },\n                  \"operation\": {\n                    \"description\": \"How to update the issue body: 'append' (default - add to end with separator), 'prepend' (add to start with separator), 'replace' (overwrite entire body), or 'replace-island' (update a run-specific section).\",\n                    \"enum\": [\n                      \"replace\",\n                      \"append\",\n                      \"prepend\",\n                      \"replace-island\"\n                    ],\n                    \"type\": \"string\"\n                  },\n                  \"status\": {\n                    \"description\": \"New issue status: 'open' to reopen a closed issue, 'closed' to close an open issue.\",\n                    \"enum\": [\n                      \"open\",\n                      \"closed\"\n                    ],\n                    \"type\": \"string\"\n                  },\n                  \"title\": {\n                    \"description\": \"New issue title to replace the existing title.\",\n                    \"type\": \"string\"\n                  }\n                },\n                \"type\": \"object\"\n              
},\n              \"name\": \"update_issue\"\n            },\n            {\n              \"description\": \"Push committed changes to a pull request's branch. Use this to add follow-up commits to an existing PR, such as addressing review feedback or fixing issues. Changes must be committed locally before calling this tool. CONSTRAINTS: Maximum 4 push(es) can be made.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"branch\": {\n                    \"description\": \"Branch name to push changes from. If omitted, uses the current working branch. Only specify if you need to push from a different branch.\",\n                    \"type\": \"string\"\n                  },\n                  \"message\": {\n                    \"description\": \"Commit message describing the changes. Follow repository commit message conventions (e.g., conventional commits).\",\n                    \"type\": \"string\"\n                  },\n                  \"pull_request_number\": {\n                    \"description\": \"Pull request number to push changes to. This is the numeric ID from the GitHub URL (e.g., 654 in github.com/owner/repo/pull/654). Required when the workflow target is '*' (any PR).\",\n                    \"type\": [\n                      \"number\",\n                      \"string\"\n                    ]\n                  }\n                },\n                \"required\": [\n                  \"message\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"push_to_pull_request_branch\"\n            },\n            {\n              \"description\": \"Report that a tool or capability needed to complete the task is not available, or share any information you deem important about missing functionality or limitations. 
Use this when you cannot accomplish what was requested because the required functionality is missing or access is restricted.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"alternatives\": {\n                    \"description\": \"Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).\",\n                    \"type\": \"string\"\n                  },\n                  \"reason\": {\n                    \"description\": \"Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).\",\n                    \"type\": \"string\"\n                  },\n                  \"tool\": {\n                    \"description\": \"Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.\",\n                    \"type\": \"string\"\n                  }\n                },\n                \"required\": [\n                  \"reason\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"missing_tool\"\n            },\n            {\n              \"description\": \"Log a transparency message when no significant actions are needed. Use this to confirm workflow completion and provide visibility when analysis is complete but no changes or outputs are required (e.g., 'No issues found', 'All checks passed'). This ensures the workflow produces human-visible output even when no other actions are taken.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"message\": {\n                    \"description\": \"Status or completion message to log. 
Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').\",\n                    \"type\": \"string\"\n                  }\n                },\n                \"required\": [\n                  \"message\"\n                ],\n                \"type\": \"object\"\n              },\n              \"name\": \"noop\"\n            },\n            {\n              \"description\": \"Report that data or information needed to complete the task is not available. Use this when you cannot accomplish what was requested because required data, context, or information is missing.\",\n              \"inputSchema\": {\n                \"additionalProperties\": false,\n                \"properties\": {\n                  \"alternatives\": {\n                    \"description\": \"Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).\",\n                    \"type\": \"string\"\n                  },\n                  \"context\": {\n                    \"description\": \"Additional context about the missing data or where it should come from (max 256 characters).\",\n                    \"type\": \"string\"\n                  },\n                  \"data_type\": {\n                    \"description\": \"Type or description of the missing data or information (max 128 characters). 
Be specific about what data is needed.\",\n                    \"type\": \"string\"\n                  },\n                  \"reason\": {\n                    \"description\": \"Explanation of why this data is needed to complete the task (max 256 characters).\",\n                    \"type\": \"string\"\n                  }\n                },\n                \"required\": [],\n                \"type\": \"object\"\n              },\n              \"name\": \"missing_data\"\n            }\n          ]\n          GH_AW_SAFE_OUTPUTS_TOOLS_EOF\n          cat > /opt/gh-aw/safeoutputs/validation.json << 'GH_AW_SAFE_OUTPUTS_VALIDATION_EOF'\n          {\n            \"add_comment\": {\n              \"defaultMax\": 1,\n              \"fields\": {\n                \"body\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 65000\n                },\n                \"item_number\": {\n                  \"issueOrPRNumber\": true\n                },\n                \"repo\": {\n                  \"type\": \"string\",\n                  \"maxLength\": 256\n                }\n              }\n            },\n            \"add_labels\": {\n              \"defaultMax\": 5,\n              \"fields\": {\n                \"item_number\": {\n                  \"issueOrPRNumber\": true\n                },\n                \"labels\": {\n                  \"required\": true,\n                  \"type\": \"array\",\n                  \"itemType\": \"string\",\n                  \"itemSanitize\": true,\n                  \"itemMaxLength\": 128\n                },\n                \"repo\": {\n                  \"type\": \"string\",\n                  \"maxLength\": 256\n                }\n              }\n            },\n            \"create_issue\": {\n              \"defaultMax\": 1,\n              \"fields\": {\n                \"body\": {\n                  \"required\": 
true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 65000\n                },\n                \"labels\": {\n                  \"type\": \"array\",\n                  \"itemType\": \"string\",\n                  \"itemSanitize\": true,\n                  \"itemMaxLength\": 128\n                },\n                \"parent\": {\n                  \"issueOrPRNumber\": true\n                },\n                \"repo\": {\n                  \"type\": \"string\",\n                  \"maxLength\": 256\n                },\n                \"temporary_id\": {\n                  \"type\": \"string\"\n                },\n                \"title\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 128\n                }\n              }\n            },\n            \"create_pull_request\": {\n              \"defaultMax\": 1,\n              \"fields\": {\n                \"body\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 65000\n                },\n                \"branch\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 256\n                },\n                \"draft\": {\n                  \"type\": \"boolean\"\n                },\n                \"labels\": {\n                  \"type\": \"array\",\n                  \"itemType\": \"string\",\n                  \"itemSanitize\": true,\n                  \"itemMaxLength\": 128\n                },\n                \"repo\": {\n                  \"type\": \"string\",\n                  \"maxLength\": 256\n                },\n                \"title\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n   
               \"sanitize\": true,\n                  \"maxLength\": 128\n                }\n              }\n            },\n            \"missing_data\": {\n              \"defaultMax\": 20,\n              \"fields\": {\n                \"alternatives\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 256\n                },\n                \"context\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 256\n                },\n                \"data_type\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 128\n                },\n                \"reason\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 256\n                }\n              }\n            },\n            \"missing_tool\": {\n              \"defaultMax\": 20,\n              \"fields\": {\n                \"alternatives\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 512\n                },\n                \"reason\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 256\n                },\n                \"tool\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 128\n                }\n              }\n            },\n            \"noop\": {\n              \"defaultMax\": 1,\n              \"fields\": {\n                \"message\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 65000\n                }\n              }\n            },\n            \"push_to_pull_request_branch\": {\n 
             \"defaultMax\": 1,\n              \"fields\": {\n                \"branch\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 256\n                },\n                \"message\": {\n                  \"required\": true,\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 65000\n                },\n                \"pull_request_number\": {\n                  \"issueOrPRNumber\": true\n                }\n              }\n            },\n            \"remove_labels\": {\n              \"defaultMax\": 5,\n              \"fields\": {\n                \"item_number\": {\n                  \"issueOrPRNumber\": true\n                },\n                \"labels\": {\n                  \"required\": true,\n                  \"type\": \"array\",\n                  \"itemType\": \"string\",\n                  \"itemSanitize\": true,\n                  \"itemMaxLength\": 128\n                },\n                \"repo\": {\n                  \"type\": \"string\",\n                  \"maxLength\": 256\n                }\n              }\n            },\n            \"update_issue\": {\n              \"defaultMax\": 1,\n              \"fields\": {\n                \"assignees\": {\n                  \"type\": \"array\",\n                  \"itemType\": \"string\",\n                  \"itemSanitize\": true,\n                  \"itemMaxLength\": 39\n                },\n                \"body\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 65000\n                },\n                \"issue_number\": {\n                  \"issueOrPRNumber\": true\n                },\n                \"labels\": {\n                  \"type\": \"array\",\n                  \"itemType\": \"string\",\n                  \"itemSanitize\": true,\n           
       \"itemMaxLength\": 128\n                },\n                \"milestone\": {\n                  \"optionalPositiveInteger\": true\n                },\n                \"operation\": {\n                  \"type\": \"string\",\n                  \"enum\": [\n                    \"replace\",\n                    \"append\",\n                    \"prepend\",\n                    \"replace-island\"\n                  ]\n                },\n                \"repo\": {\n                  \"type\": \"string\",\n                  \"maxLength\": 256\n                },\n                \"status\": {\n                  \"type\": \"string\",\n                  \"enum\": [\n                    \"open\",\n                    \"closed\"\n                  ]\n                },\n                \"title\": {\n                  \"type\": \"string\",\n                  \"sanitize\": true,\n                  \"maxLength\": 128\n                }\n              },\n              \"customValidation\": \"requiresOneOf:status,title,body\"\n            }\n          }\n          GH_AW_SAFE_OUTPUTS_VALIDATION_EOF\n      - name: Generate Safe Outputs MCP Server Config\n        id: safe-outputs-config\n        run: |\n          # Generate a secure random API key (360 bits of entropy, 40+ chars)\n          # Mask immediately to prevent timing vulnerabilities\n          API_KEY=$(openssl rand -base64 45 | tr -d '/+=')\n          echo \"::add-mask::${API_KEY}\"\n          \n          PORT=3001\n          \n          # Set outputs for next steps\n          {\n            echo \"safe_outputs_api_key=${API_KEY}\"\n            echo \"safe_outputs_port=${PORT}\"\n          } >> \"$GITHUB_OUTPUT\"\n          \n          echo \"Safe Outputs MCP server will run on port ${PORT}\"\n          \n      - name: Start Safe Outputs MCP HTTP Server\n        id: safe-outputs-start\n        env:\n          DEBUG: '*'\n          GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-config.outputs.safe_outputs_port 
}}\n          GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-config.outputs.safe_outputs_api_key }}\n          GH_AW_SAFE_OUTPUTS_TOOLS_PATH: /opt/gh-aw/safeoutputs/tools.json\n          GH_AW_SAFE_OUTPUTS_CONFIG_PATH: /opt/gh-aw/safeoutputs/config.json\n          GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs\n        run: |\n          # Environment variables are set above to prevent template injection\n          export DEBUG\n          export GH_AW_SAFE_OUTPUTS_PORT\n          export GH_AW_SAFE_OUTPUTS_API_KEY\n          export GH_AW_SAFE_OUTPUTS_TOOLS_PATH\n          export GH_AW_SAFE_OUTPUTS_CONFIG_PATH\n          export GH_AW_MCP_LOG_DIR\n          \n          bash /opt/gh-aw/actions/start_safe_outputs_server.sh\n          \n      - name: Start MCP Gateway\n        id: start-mcp-gateway\n        env:\n          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}\n          GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-start.outputs.api_key }}\n          GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-start.outputs.port }}\n          GITHUB_MCP_LOCKDOWN: ${{ steps.determine-automatic-lockdown.outputs.lockdown == 'true' && '1' || '0' }}\n          GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n        run: |\n          set -eo pipefail\n          mkdir -p /tmp/gh-aw/mcp-config\n          \n          # Export gateway environment variables for MCP config and gateway script\n          export MCP_GATEWAY_PORT=\"80\"\n          export MCP_GATEWAY_DOMAIN=\"host.docker.internal\"\n          MCP_GATEWAY_API_KEY=$(openssl rand -base64 45 | tr -d '/+=')\n          echo \"::add-mask::${MCP_GATEWAY_API_KEY}\"\n          export MCP_GATEWAY_API_KEY\n          export MCP_GATEWAY_PAYLOAD_DIR=\"/tmp/gh-aw/mcp-payloads\"\n          mkdir -p \"${MCP_GATEWAY_PAYLOAD_DIR}\"\n          export DEBUG=\"*\"\n          \n          export GH_AW_ENGINE=\"copilot\"\n          export 
MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '\"${GITHUB_WORKSPACE}\"':'\"${GITHUB_WORKSPACE}\"':rw ghcr.io/github/gh-aw-mcpg:v0.1.5'\n          \n          mkdir -p /home/runner/.copilot\n          cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh\n          {\n            \"mcpServers\": {\n              \"github\": {\n                \"type\": \"stdio\",\n                \"container\": \"ghcr.io/github/github-mcp-server:v0.31.0\",\n                \"env\": {\n                  \"GITHUB_LOCKDOWN_MODE\": \"$GITHUB_MCP_LOCKDOWN\",\n                  \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"\\${GITHUB_MCP_SERVER_TOKEN}\",\n                  \"GITHUB_READ_ONLY\": \"1\",\n                  \"GITHUB_TOOLSETS\": \"all\"\n                }\n              },\n              \"safeoutputs\": {\n                \"type\": \"http\",\n                \"url\": \"http://host.docker.internal:$GH_AW_SAFE_OUTPUTS_PORT\",\n                \"headers\": 
{\n                  \"Authorization\": \"\\${GH_AW_SAFE_OUTPUTS_API_KEY}\"\n                }\n              }\n            },\n            \"gateway\": {\n              \"port\": $MCP_GATEWAY_PORT,\n              \"domain\": \"${MCP_GATEWAY_DOMAIN}\",\n              \"apiKey\": \"${MCP_GATEWAY_API_KEY}\",\n              \"payloadDir\": \"${MCP_GATEWAY_PAYLOAD_DIR}\"\n            }\n          }\n          GH_AW_MCP_CONFIG_EOF\n      - name: Generate workflow overview\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n            const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');\n            await generateWorkflowOverview(core);\n      - name: Download prompt artifact\n        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6\n        with:\n          name: prompt\n          path: /tmp/gh-aw/aw-prompts\n      - name: Clean git credentials\n        run: bash /opt/gh-aw/actions/clean_git_credentials.sh\n      - name: Execute GitHub Copilot CLI\n        id: agentic_execution\n        # Copilot CLI tool arguments (sorted):\n        timeout-minutes: 60\n        run: |\n          set -o pipefail\n          sudo -E awf --env-all --container-workdir \"${GITHUB_WORKSPACE}\" --allow-domains 
\"*.jsr.io,*.pythonhosted.org,adoptium.net,anaconda.org,api.adoptium.net,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.foojay.io,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.npms.io,api.nuget.org,api.snapcraft.io,archive.apache.org,archive.ubuntu.com,azure.archive.ubuntu.com,azuresearch-usnc.nuget.org,azuresearch-ussc.nuget.org,binstar.org,bootstrap.pypa.io,builds.dotnet.microsoft.com,bun.sh,cdn.azul.com,cdn.jsdelivr.net,central.sonatype.com,ci.dot.net,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,dc.services.visualstudio.com,deb.nodesource.com,deno.land,dist.nuget.org,dl.google.com,dlcdn.apache.org,dot.net,dotnet.microsoft.com,dotnetcli.blob.core.windows.net,download.eclipse.org,download.java.net,download.oracle.com,downloads.gradle-dn.com,esm.sh,files.pythonhosted.org,get.pnpm.io,github.com,googleapis.deno.dev,googlechromelabs.github.io,gradle.org,host.docker.internal,index.crates.io,jcenter.bintray.com,jdk.java.net,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,maven.apache.org,maven.google.com,maven.oracle.com,maven.pkg.github.com,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,nuget.org,nuget.pkg.github.com,nugetregistryv2prod.blob.core.windows.net,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,oneocsp.microsoft.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,pkgs.dev.azure.com,plugins-artifacts.gradle.org,plugins.gradle.org,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.anaconda.com,repo.continuum.io,repo.gradle.org,repo.grails.org,repo.maven.apache.org,repo.spring.io,repo.yarnpkg
.com,repo1.maven.org,s.symcb.com,s.symcd.com,security.ubuntu.com,services.gradle.org,sh.rustup.rs,skimdb.npmjs.com,static.crates.io,static.rust-lang.org,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.java.com,www.microsoft.com,www.npmjs.com,www.npmjs.org,yarnpkg.com\" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \\\n            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir \"${GITHUB_WORKSPACE}\" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt \"$(cat /tmp/gh-aw/aw-prompts/prompt.txt)\"${GH_AW_MODEL_AGENT_COPILOT:+ --model \"$GH_AW_MODEL_AGENT_COPILOT\"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log\n        env:\n          COPILOT_AGENT_RUNNER_TYPE: STANDALONE\n          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}\n          GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json\n          GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}\n          GITHUB_HEAD_REF: ${{ github.head_ref }}\n          GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          GITHUB_REF_NAME: ${{ github.ref_name }}\n          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}\n          GITHUB_WORKSPACE: ${{ github.workspace }}\n          XDG_CONFIG_HOME: /home/runner\n      - name: Configure Git credentials\n        env:\n          REPO_NAME: ${{ github.repository }}\n          SERVER_URL: ${{ github.server_url }}\n        run: |\n          git config --global user.email \"github-actions[bot]@users.noreply.github.com\"\n          git config --global user.name \"github-actions[bot]\"\n          git 
config --global am.keepcr true\n          # Re-authenticate git with GitHub token\n          SERVER_URL_STRIPPED=\"${SERVER_URL#https://}\"\n          git remote set-url origin \"https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git\"\n          echo \"Git configured with standard GitHub Actions identity\"\n      - name: Copy Copilot session state files to logs\n        if: always()\n        continue-on-error: true\n        run: |\n          # Copy Copilot session state files to logs folder for artifact collection\n          # This ensures they are in /tmp/gh-aw/ where secret redaction can scan them\n          SESSION_STATE_DIR=\"$HOME/.copilot/session-state\"\n          LOGS_DIR=\"/tmp/gh-aw/sandbox/agent/logs\"\n          \n          if [ -d \"$SESSION_STATE_DIR\" ]; then\n            echo \"Copying Copilot session state files from $SESSION_STATE_DIR to $LOGS_DIR\"\n            mkdir -p \"$LOGS_DIR\"\n            cp -v \"$SESSION_STATE_DIR\"/*.jsonl \"$LOGS_DIR/\" 2>/dev/null || true\n            echo \"Session state files copied successfully\"\n          else\n            echo \"No session-state directory found at $SESSION_STATE_DIR\"\n          fi\n      - name: Stop MCP Gateway\n        if: always()\n        continue-on-error: true\n        env:\n          MCP_GATEWAY_PORT: ${{ steps.start-mcp-gateway.outputs.gateway-port }}\n          MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }}\n          GATEWAY_PID: ${{ steps.start-mcp-gateway.outputs.gateway-pid }}\n        run: |\n          bash /opt/gh-aw/actions/stop_mcp_gateway.sh \"$GATEWAY_PID\"\n      - name: Redact secrets in logs\n        if: always()\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = 
require('/opt/gh-aw/actions/redact_secrets.cjs');\n            await main();\n        env:\n          GH_AW_SECRET_NAMES: 'COPILOT_GITHUB_TOKEN,GH_AW_GITHUB_MCP_SERVER_TOKEN,GH_AW_GITHUB_TOKEN,GITHUB_TOKEN'\n          SECRET_COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}\n          SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}\n          SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}\n          SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n      - name: Upload Safe Outputs\n        if: always()\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: safe-output\n          path: ${{ env.GH_AW_SAFE_OUTPUTS }}\n          if-no-files-found: warn\n      - name: Ingest agent output\n        id: collect_output\n        if: always()\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}\n          GH_AW_ALLOWED_DOMAINS: 
\"*.jsr.io,*.pythonhosted.org,adoptium.net,anaconda.org,api.adoptium.net,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.foojay.io,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.npms.io,api.nuget.org,api.snapcraft.io,archive.apache.org,archive.ubuntu.com,azure.archive.ubuntu.com,azuresearch-usnc.nuget.org,azuresearch-ussc.nuget.org,binstar.org,bootstrap.pypa.io,builds.dotnet.microsoft.com,bun.sh,cdn.azul.com,cdn.jsdelivr.net,central.sonatype.com,ci.dot.net,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,dc.services.visualstudio.com,deb.nodesource.com,deno.land,dist.nuget.org,dl.google.com,dlcdn.apache.org,dot.net,dotnet.microsoft.com,dotnetcli.blob.core.windows.net,download.eclipse.org,download.java.net,download.oracle.com,downloads.gradle-dn.com,esm.sh,files.pythonhosted.org,get.pnpm.io,github.com,googleapis.deno.dev,googlechromelabs.github.io,gradle.org,host.docker.internal,index.crates.io,jcenter.bintray.com,jdk.java.net,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,maven.apache.org,maven.google.com,maven.oracle.com,maven.pkg.github.com,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,nuget.org,nuget.pkg.github.com,nugetregistryv2prod.blob.core.windows.net,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,oneocsp.microsoft.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,pkgs.dev.azure.com,plugins-artifacts.gradle.org,plugins.gradle.org,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.anaconda.com,repo.continuum.io,repo.gradle.org,repo.grails.org,repo.maven.apache.org,repo.spring.io,repo.yarnpkg
.com,repo1.maven.org,s.symcb.com,s.symcd.com,security.ubuntu.com,services.gradle.org,sh.rustup.rs,skimdb.npmjs.com,static.crates.io,static.rust-lang.org,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.java.com,www.microsoft.com,www.npmjs.com,www.npmjs.org,yarnpkg.com\"\n          GITHUB_SERVER_URL: ${{ github.server_url }}\n          GITHUB_API_URL: ${{ github.api_url }}\n          GH_AW_COMMAND: repo-assist\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/collect_ndjson_output.cjs');\n            await main();\n      - name: Upload sanitized agent output\n        if: always() && env.GH_AW_AGENT_OUTPUT\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: agent-output\n          path: ${{ env.GH_AW_AGENT_OUTPUT }}\n          if-no-files-found: warn\n      - name: Upload engine output files\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: agent_outputs\n          path: |\n            /tmp/gh-aw/sandbox/agent/logs/\n            /tmp/gh-aw/redacted-urls.log\n          if-no-files-found: ignore\n      - name: Parse agent logs for step summary\n        if: always()\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: /tmp/gh-aw/sandbox/agent/logs/\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/parse_copilot_log.cjs');\n            await main();\n      - name: Parse MCP Gateway logs for step summary\n        if: always()\n     
   uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/parse_mcp_gateway_log.cjs');\n            await main();\n      - name: Print firewall logs\n        if: always()\n        continue-on-error: true\n        env:\n          AWF_LOGS_DIR: /tmp/gh-aw/sandbox/firewall/logs\n        run: |\n          # Fix permissions on firewall logs so they can be uploaded as artifacts\n          # AWF runs with sudo, creating files owned by root\n          sudo chmod -R a+r /tmp/gh-aw/sandbox/firewall/logs 2>/dev/null || true\n          # Only run awf logs summary if awf command exists (it may not be installed if workflow failed before install step)\n          if command -v awf &> /dev/null; then\n            awf logs summary | tee -a \"$GITHUB_STEP_SUMMARY\"\n          else\n            echo 'AWF binary not installed, skipping firewall log summary'\n          fi\n      # Upload repo memory as artifacts for push job\n      - name: Upload repo-memory artifact (default)\n        if: always()\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: repo-memory-default\n          path: /tmp/gh-aw/repo-memory/default\n          retention-days: 1\n          if-no-files-found: ignore\n      - name: Upload agent artifacts\n        if: always()\n        continue-on-error: true\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: agent-artifacts\n          path: |\n            /tmp/gh-aw/aw-prompts/prompt.txt\n            /tmp/gh-aw/aw_info.json\n            /tmp/gh-aw/mcp-logs/\n            /tmp/gh-aw/sandbox/firewall/logs/\n            /tmp/gh-aw/agent-stdio.log\n            /tmp/gh-aw/agent/\n            
/tmp/gh-aw/aw-*.patch\n          if-no-files-found: ignore\n      # --- Threat Detection (inline) ---\n      - name: Check if detection needed\n        id: detection_guard\n        if: always()\n        env:\n          OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}\n          HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}\n        run: |\n          if [[ -n \"$OUTPUT_TYPES\" || \"$HAS_PATCH\" == \"true\" ]]; then\n            echo \"run_detection=true\" >> \"$GITHUB_OUTPUT\"\n            echo \"Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH\"\n          else\n            echo \"run_detection=false\" >> \"$GITHUB_OUTPUT\"\n            echo \"Detection skipped: no agent outputs or patches to analyze\"\n          fi\n      - name: Clear MCP configuration for detection\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        run: |\n          rm -f /tmp/gh-aw/mcp-config/mcp-servers.json\n          rm -f /home/runner/.copilot/mcp-config.json\n          rm -f \"$GITHUB_WORKSPACE/.gemini/settings.json\"\n      - name: Prepare threat detection files\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        run: |\n          mkdir -p /tmp/gh-aw/threat-detection/aw-prompts\n          cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true\n          cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true\n          for f in /tmp/gh-aw/aw-*.patch; do\n            [ -f \"$f\" ] && cp \"$f\" /tmp/gh-aw/threat-detection/ 2>/dev/null || true\n          done\n          echo \"Prepared threat detection files:\"\n          ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true\n      - name: Setup threat detection\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        
env:\n          WORKFLOW_NAME: \"Repo Assist\"\n          WORKFLOW_DESCRIPTION: \"A friendly repository assistant that runs daily to support contributors and maintainers.\\nCan also be triggered on-demand via '/repo-assist <instructions>' to perform specific tasks.\\n- Comments helpfully on open issues to unblock contributors and onboard newcomers\\n- Identifies issues that can be fixed and creates draft pull requests with fixes\\n- Studies the codebase and proposes improvements via PRs\\n- Updates its own PRs when CI fails or merge conflicts arise\\n- Nudges stale PRs waiting for author response\\n- Manages issue and PR labels for organization\\n- Prepares releases by updating changelogs and proposing version bumps\\n- Welcomes new contributors with friendly onboarding\\n- Maintains a persistent memory of work done and what remains\\nAlways polite, constructive, and mindful of the project's goals.\"\n          HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');\n            await main();\n      - name: Ensure threat-detection directory and log\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        run: |\n          mkdir -p /tmp/gh-aw/threat-detection\n          touch /tmp/gh-aw/threat-detection/detection.log\n      - name: Execute GitHub Copilot CLI\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        id: detection_agentic_execution\n        # Copilot CLI tool arguments (sorted):\n        # --allow-tool shell(cat)\n        # --allow-tool shell(grep)\n        # --allow-tool shell(head)\n        # --allow-tool shell(jq)\n        # --allow-tool shell(ls)\n        # --allow-tool shell(tail)\n        # --allow-tool shell(wc)\n     
   timeout-minutes: 20\n        run: |\n          set -o pipefail\n          sudo -E awf --env-all --container-workdir \"${GITHUB_WORKSPACE}\" --allow-domains \"api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com\" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \\\n            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir \"${GITHUB_WORKSPACE}\" --disable-builtin-mcps --allow-tool '\\''shell(cat)'\\'' --allow-tool '\\''shell(grep)'\\'' --allow-tool '\\''shell(head)'\\'' --allow-tool '\\''shell(jq)'\\'' --allow-tool '\\''shell(ls)'\\'' --allow-tool '\\''shell(tail)'\\'' --allow-tool '\\''shell(wc)'\\'' --prompt \"$(cat /tmp/gh-aw/aw-prompts/prompt.txt)\"${GH_AW_MODEL_DETECTION_COPILOT:+ --model \"$GH_AW_MODEL_DETECTION_COPILOT\"}' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log\n        env:\n          COPILOT_AGENT_RUNNER_TYPE: STANDALONE\n          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}\n          GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}\n          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt\n          GITHUB_HEAD_REF: ${{ github.head_ref }}\n          GITHUB_REF_NAME: ${{ github.ref_name }}\n          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}\n          GITHUB_WORKSPACE: ${{ github.workspace }}\n          XDG_CONFIG_HOME: /home/runner\n      - name: Parse threat detection results\n        id: parse_detection_results\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        with:\n          script: |\n           
 const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');\n            await main();\n      - name: Upload threat detection log\n        if: always() && steps.detection_guard.outputs.run_detection == 'true'\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: threat-detection.log\n          path: /tmp/gh-aw/threat-detection/detection.log\n          if-no-files-found: ignore\n      - name: Set detection conclusion\n        id: detection_conclusion\n        if: always()\n        env:\n          RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}\n          DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}\n        run: |\n          if [[ \"$RUN_DETECTION\" != \"true\" ]]; then\n            echo \"conclusion=skipped\" >> \"$GITHUB_OUTPUT\"\n            echo \"success=true\" >> \"$GITHUB_OUTPUT\"\n            echo \"Detection was not needed, marking as skipped\"\n          elif [[ \"$DETECTION_SUCCESS\" == \"true\" ]]; then\n            echo \"conclusion=success\" >> \"$GITHUB_OUTPUT\"\n            echo \"success=true\" >> \"$GITHUB_OUTPUT\"\n            echo \"Detection passed successfully\"\n          else\n            echo \"conclusion=failure\" >> \"$GITHUB_OUTPUT\"\n            echo \"success=false\" >> \"$GITHUB_OUTPUT\"\n            echo \"Detection found issues\"\n          fi\n\n  conclusion:\n    needs:\n      - activation\n      - agent\n      - push_repo_memory\n      - safe_outputs\n    if: (always()) && (needs.agent.result != 'skipped')\n    runs-on: ubuntu-slim\n    permissions:\n      contents: write\n      discussions: write\n      issues: write\n      pull-requests: write\n    outputs:\n      noop_message: ${{ steps.noop.outputs.noop_message }}\n      tools_reported: ${{ 
steps.missing_tool.outputs.tools_reported }}\n      total_count: ${{ steps.missing_tool.outputs.total_count }}\n    steps:\n      - name: Setup Scripts\n        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2\n        with:\n          destination: /opt/gh-aw/actions\n      - name: Download agent output artifact\n        continue-on-error: true\n        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6\n        with:\n          name: agent-output\n          path: /tmp/gh-aw/safeoutputs/\n      - name: Setup agent output environment variable\n        run: |\n          mkdir -p /tmp/gh-aw/safeoutputs/\n          find \"/tmp/gh-aw/safeoutputs/\" -type f -print\n          echo \"GH_AW_AGENT_OUTPUT=/tmp/gh-aw/safeoutputs/agent_output.json\" >> \"$GITHUB_ENV\"\n      - name: Process No-Op Messages\n        id: noop\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}\n          GH_AW_NOOP_MAX: \"1\"\n          GH_AW_WORKFLOW_NAME: \"Repo Assist\"\n          GH_AW_WORKFLOW_SOURCE: \"githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\"\n          GH_AW_WORKFLOW_SOURCE_URL: \"${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md\"\n        with:\n          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/noop.cjs');\n            await main();\n      - name: Record Missing Tool\n        id: missing_tool\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}\n          
GH_AW_WORKFLOW_NAME: \"Repo Assist\"\n          GH_AW_WORKFLOW_SOURCE: \"githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\"\n          GH_AW_WORKFLOW_SOURCE_URL: \"${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md\"\n        with:\n          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/missing_tool.cjs');\n            await main();\n      - name: Handle Agent Failure\n        id: handle_agent_failure\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}\n          GH_AW_WORKFLOW_NAME: \"Repo Assist\"\n          GH_AW_WORKFLOW_SOURCE: \"githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\"\n          GH_AW_WORKFLOW_SOURCE_URL: \"${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md\"\n          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}\n          GH_AW_WORKFLOW_ID: \"repo-assist\"\n          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}\n          GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}\n          GH_AW_CODE_PUSH_FAILURE_ERRORS: ${{ needs.safe_outputs.outputs.code_push_failure_errors }}\n          GH_AW_CODE_PUSH_FAILURE_COUNT: ${{ needs.safe_outputs.outputs.code_push_failure_count }}\n          GH_AW_REPO_MEMORY_VALIDATION_FAILED_default: ${{ needs.push_repo_memory.outputs.validation_failed_default }}\n          
GH_AW_REPO_MEMORY_VALIDATION_ERROR_default: ${{ needs.push_repo_memory.outputs.validation_error_default }}\n          GH_AW_GROUP_REPORTS: \"false\"\n        with:\n          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/handle_agent_failure.cjs');\n            await main();\n      - name: Handle No-Op Message\n        id: handle_noop_message\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}\n          GH_AW_WORKFLOW_NAME: \"Repo Assist\"\n          GH_AW_WORKFLOW_SOURCE: \"githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\"\n          GH_AW_WORKFLOW_SOURCE_URL: \"${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md\"\n          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}\n          GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}\n          GH_AW_NOOP_REPORT_AS_ISSUE: \"true\"\n        with:\n          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');\n            await main();\n      - name: Handle Create Pull Request Error\n        id: handle_create_pr_error\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}\n     
     GH_AW_WORKFLOW_NAME: \"Repo Assist\"\n          GH_AW_WORKFLOW_SOURCE: \"githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\"\n          GH_AW_WORKFLOW_SOURCE_URL: \"${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md\"\n          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}\n        with:\n          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/handle_create_pr_error.cjs');\n            await main();\n\n  pre_activation:\n    if: >\n      ((github.event_name == 'issues' || github.event_name == 'issue_comment' || github.event_name == 'pull_request' ||\n      github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' || github.event_name == 'discussion_comment') &&\n      ((github.event_name == 'issues') && ((startsWith(github.event.issue.body, '/repo-assist ')) || (github.event.issue.body == '/repo-assist')) ||\n      (github.event_name == 'issue_comment') && (((startsWith(github.event.comment.body, '/repo-assist ')) ||\n      (github.event.comment.body == '/repo-assist')) && (github.event.issue.pull_request == null)) || (github.event_name == 'issue_comment') &&\n      (((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) &&\n      (github.event.issue.pull_request != null)) || (github.event_name == 'pull_request_review_comment') &&\n      ((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) ||\n      (github.event_name == 'pull_request') && ((startsWith(github.event.pull_request.body, '/repo-assist ')) ||\n      
(github.event.pull_request.body == '/repo-assist')) || (github.event_name == 'discussion') && ((startsWith(github.event.discussion.body, '/repo-assist ')) ||\n      (github.event.discussion.body == '/repo-assist')) || (github.event_name == 'discussion_comment') && ((startsWith(github.event.comment.body, '/repo-assist ')) ||\n      (github.event.comment.body == '/repo-assist')))) || (!(github.event_name == 'issues' || github.event_name == 'issue_comment' ||\n      github.event_name == 'pull_request' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' ||\n      github.event_name == 'discussion_comment'))\n    runs-on: ubuntu-slim\n    permissions:\n      discussions: write\n      issues: write\n      pull-requests: write\n    outputs:\n      activated: ${{ (steps.check_membership.outputs.is_team_member == 'true') && (steps.check_command_position.outputs.command_position_ok == 'true') }}\n      matched_command: ${{ steps.check_command_position.outputs.matched_command }}\n    steps:\n      - name: Setup Scripts\n        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2\n        with:\n          destination: /opt/gh-aw/actions\n      - name: Add eyes reaction for immediate feedback\n        id: react\n        if: github.event_name == 'issues' || github.event_name == 'issue_comment' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' || github.event_name == 'discussion_comment' || (github.event_name == 'pull_request') && (github.event.pull_request.head.repo.id == github.repository_id)\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_REACTION: \"eyes\"\n        with:\n          github-token: ${{ secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            
const { main } = require('/opt/gh-aw/actions/add_reaction.cjs');\n            await main();\n      - name: Check team membership for command workflow\n        id: check_membership\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_REQUIRED_ROLES: admin,maintainer,write\n        with:\n          github-token: ${{ secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/check_membership.cjs');\n            await main();\n      - name: Check command position\n        id: check_command_position\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_COMMANDS: \"[\\\"repo-assist\\\"]\"\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/check_command_position.cjs');\n            await main();\n\n  push_repo_memory:\n    needs: agent\n    if: always() && needs.agent.outputs.detection_success == 'true'\n    runs-on: ubuntu-latest\n    permissions:\n      contents: write\n    outputs:\n      validation_error_default: ${{ steps.push_repo_memory_default.outputs.validation_error }}\n      validation_failed_default: ${{ steps.push_repo_memory_default.outputs.validation_failed }}\n    steps:\n      - name: Setup Scripts\n        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2\n        with:\n          destination: /opt/gh-aw/actions\n      - name: Checkout repository\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          persist-credentials: false\n          sparse-checkout: .\n      - name: 
Configure Git credentials\n        env:\n          REPO_NAME: ${{ github.repository }}\n          SERVER_URL: ${{ github.server_url }}\n        run: |\n          git config --global user.email \"github-actions[bot]@users.noreply.github.com\"\n          git config --global user.name \"github-actions[bot]\"\n          git config --global am.keepcr true\n          # Re-authenticate git with GitHub token\n          SERVER_URL_STRIPPED=\"${SERVER_URL#https://}\"\n          git remote set-url origin \"https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git\"\n          echo \"Git configured with standard GitHub Actions identity\"\n      - name: Download repo-memory artifact (default)\n        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6\n        continue-on-error: true\n        with:\n          name: repo-memory-default\n          path: /tmp/gh-aw/repo-memory/default\n      - name: Push repo-memory changes (default)\n        id: push_repo_memory_default\n        if: always()\n        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_TOKEN: ${{ github.token }}\n          GITHUB_RUN_ID: ${{ github.run_id }}\n          GITHUB_SERVER_URL: ${{ github.server_url }}\n          ARTIFACT_DIR: /tmp/gh-aw/repo-memory/default\n          MEMORY_ID: default\n          TARGET_REPO: ${{ github.repository }}\n          BRANCH_NAME: memory/repo-assist\n          MAX_FILE_SIZE: 10240\n          MAX_FILE_COUNT: 100\n          MAX_PATCH_SIZE: 10240\n          ALLOWED_EXTENSIONS: '[]'\n        with:\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/push_repo_memory.cjs');\n            await main();\n\n  safe_outputs:\n    needs:\n      - activation\n      - agent\n    if: ((!cancelled()) && (needs.agent.result 
!= 'skipped')) && (needs.agent.outputs.detection_success == 'true')\n    runs-on: ubuntu-slim\n    permissions:\n      contents: write\n      discussions: write\n      issues: write\n      pull-requests: write\n    timeout-minutes: 15\n    env:\n      GH_AW_ENGINE_ID: \"copilot\"\n      GH_AW_WORKFLOW_ID: \"repo-assist\"\n      GH_AW_WORKFLOW_NAME: \"Repo Assist\"\n      GH_AW_WORKFLOW_SOURCE: \"githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\"\n      GH_AW_WORKFLOW_SOURCE_URL: \"${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md\"\n    outputs:\n      code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}\n      code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}\n      create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}\n      create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}\n      process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}\n      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}\n    steps:\n      - name: Setup Scripts\n        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2\n        with:\n          destination: /opt/gh-aw/actions\n      - name: Download agent output artifact\n        continue-on-error: true\n        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6\n        with:\n          name: agent-output\n          path: /tmp/gh-aw/safeoutputs/\n      - name: Setup agent output environment variable\n        run: |\n          mkdir -p /tmp/gh-aw/safeoutputs/\n          find \"/tmp/gh-aw/safeoutputs/\" -type f -print\n          echo \"GH_AW_AGENT_OUTPUT=/tmp/gh-aw/safeoutputs/agent_output.json\" >> \"$GITHUB_ENV\"\n      - 
name: Download patch artifact\n        continue-on-error: true\n        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6\n        with:\n          name: agent-artifacts\n          path: /tmp/gh-aw/\n      - name: Checkout repository\n        if: (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'create_pull_request'))) || (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'push_to_pull_request_branch')))\n        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2\n        with:\n          ref: ${{ github.base_ref || github.ref_name }}\n          token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          persist-credentials: false\n          fetch-depth: 1\n      - name: Configure Git credentials\n        if: (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'create_pull_request'))) || (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'push_to_pull_request_branch')))\n        env:\n          REPO_NAME: ${{ github.repository }}\n          SERVER_URL: ${{ github.server_url }}\n          GIT_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n        run: |\n          git config --global user.email \"github-actions[bot]@users.noreply.github.com\"\n          git config --global user.name \"github-actions[bot]\"\n          git config --global am.keepcr true\n          # Re-authenticate git with GitHub token\n          SERVER_URL_STRIPPED=\"${SERVER_URL#https://}\"\n          git remote set-url origin \"https://x-access-token:${GIT_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git\"\n          echo \"Git configured with standard GitHub Actions identity\"\n      - name: Process Safe Outputs\n        id: process_safe_outputs\n        uses: 
actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8\n        env:\n          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}\n          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: \"{\\\"add_comment\\\":{\\\"hide_older_comments\\\":true,\\\"max\\\":10,\\\"target\\\":\\\"*\\\"},\\\"add_labels\\\":{\\\"allowed\\\":[\\\"bug\\\",\\\"enhancement\\\",\\\"help wanted\\\",\\\"good first issue\\\",\\\"spam\\\",\\\"off topic\\\",\\\"documentation\\\",\\\"question\\\",\\\"duplicate\\\",\\\"wontfix\\\",\\\"needs triage\\\",\\\"needs investigation\\\",\\\"breaking change\\\",\\\"performance\\\",\\\"security\\\",\\\"refactor\\\"],\\\"max\\\":30,\\\"target\\\":\\\"*\\\"},\\\"create_issue\\\":{\\\"labels\\\":[\\\"automation\\\",\\\"repo-assist\\\"],\\\"max\\\":4,\\\"title_prefix\\\":\\\"[Repo Assist] \\\"},\\\"create_pull_request\\\":{\\\"base_branch\\\":\\\"${{ github.base_ref || github.ref_name }}\\\",\\\"draft\\\":true,\\\"labels\\\":[\\\"automation\\\",\\\"repo-assist\\\"],\\\"max\\\":4,\\\"max_patch_size\\\":1024,\\\"title_prefix\\\":\\\"[Repo Assist] \\\"},\\\"missing_data\\\":{},\\\"missing_tool\\\":{},\\\"push_to_pull_request_branch\\\":{\\\"base_branch\\\":\\\"${{ github.base_ref || github.ref_name }}\\\",\\\"if_no_changes\\\":\\\"warn\\\",\\\"max\\\":4,\\\"max_patch_size\\\":1024,\\\"target\\\":\\\"*\\\",\\\"title_prefix\\\":\\\"[Repo Assist] \\\"},\\\"remove_labels\\\":{\\\"allowed\\\":[\\\"bug\\\",\\\"enhancement\\\",\\\"help wanted\\\",\\\"good first issue\\\",\\\"spam\\\",\\\"off topic\\\",\\\"documentation\\\",\\\"question\\\",\\\"duplicate\\\",\\\"wontfix\\\",\\\"needs triage\\\",\\\"needs investigation\\\",\\\"breaking change\\\",\\\"performance\\\",\\\"security\\\",\\\"refactor\\\"],\\\"max\\\":5,\\\"target\\\":\\\"*\\\"},\\\"update_issue\\\":{\\\"allow_body\\\":true,\\\"max\\\":1,\\\"target\\\":\\\"*\\\",\\\"title_prefix\\\":\\\"[Repo Assist] \\\"}}\"\n          GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }}\n        with:\n          
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}\n          script: |\n            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');\n            setupGlobals(core, github, context, exec, io);\n            const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');\n            await main();\n      - name: Upload safe output items manifest\n        if: always()\n        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6\n        with:\n          name: safe-output-items\n          path: /tmp/safe-output-items.jsonl\n          if-no-files-found: warn\n\n"
  },
  {
    "path": ".github/workflows/repo-assist.md",
    "content": "---\ndescription: |\n  A friendly repository assistant that runs daily to support contributors and maintainers.\n  Can also be triggered on-demand via '/repo-assist <instructions>' to perform specific tasks.\n  - Comments helpfully on open issues to unblock contributors and onboard newcomers\n  - Identifies issues that can be fixed and creates draft pull requests with fixes\n  - Studies the codebase and proposes improvements via PRs\n  - Updates its own PRs when CI fails or merge conflicts arise\n  - Nudges stale PRs waiting for author response\n  - Manages issue and PR labels for organization\n  - Prepares releases by updating changelogs and proposing version bumps\n  - Welcomes new contributors with friendly onboarding\n  - Maintains a persistent memory of work done and what remains\n  Always polite, constructive, and mindful of the project's goals.\n\non:\n  schedule: daily\n  workflow_dispatch:\n  slash_command:\n    name: repo-assist\n  reaction: \"eyes\"\n\ntimeout-minutes: 60\n\npermissions: read-all\n\nnetwork:\n  allowed:\n  - defaults\n  - dotnet\n  - node\n  - python\n  - rust\n  - java\n\nsafe-outputs:\n  add-comment:\n    max: 10\n    target: \"*\"\n    hide-older-comments: true\n  create-pull-request:\n    draft: true\n    title-prefix: \"[Repo Assist] \"\n    labels: [automation, repo-assist]\n    max: 4\n  push-to-pull-request-branch:\n    target: \"*\"\n    title-prefix: \"[Repo Assist] \"\n    max: 4\n  create-issue:\n    title-prefix: \"[Repo Assist] \"\n    labels: [automation, repo-assist]\n    max: 4\n  update-issue:\n    target: \"*\"\n    title-prefix: \"[Repo Assist] \"\n    max: 1\n  add-labels:\n    allowed: [bug, enhancement, \"help wanted\", \"good first issue\", \"spam\", \"off topic\", documentation, question, duplicate, wontfix, \"needs triage\", \"needs investigation\", \"breaking change\", performance, security, refactor]\n    max: 30\n    target: \"*\" \n  remove-labels:\n    allowed: [bug, enhancement, \"help 
wanted\", \"good first issue\", \"spam\", \"off topic\", documentation, question, duplicate, wontfix, \"needs triage\", \"needs investigation\", \"breaking change\", performance, security, refactor]\n    max: 5\n    target: \"*\" \n\ntools:\n  web-fetch:\n  github:\n    toolsets: [all]\n  bash: true\n  repo-memory: true\n\nsource: githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd\nengine: copilot\n---\n\n# Repo Assist\n\n## Command Mode\n\nTake heed of **instructions**: \"${{ steps.sanitized.outputs.text }}\"\n\nIf these are non-empty (not \"\"), then you have been triggered via `/repo-assist <instructions>`. Follow the user's instructions instead of the normal scheduled workflow. Focus exclusively on those instructions. Apply all the same guidelines (read AGENTS.md, run formatters/linters/tests, be polite, use AI disclosure). Skip the round-robin task workflow and reporting below, and instead directly do what the user requested. If no specific instructions were provided (empty or blank), proceed with the normal scheduled workflow below.\n\nThen exit  -  do not run the normal workflow after completing the instructions.\n\n## Non-Command Mode\n\nYou are Repo Assist for `${{ github.repository }}`. Your job is to support human contributors, help onboard newcomers, identify improvements, and fix bugs by creating pull requests. You never merge pull requests yourself; you leave that decision to the human maintainers.\n\nAlways be:\n\n- **Polite and encouraging**: Every contributor deserves respect. Use warm, inclusive language.\n- **Concise**: Keep comments focused and actionable. Avoid walls of text.\n- **Mindful of project values**: Prioritize **stability**, **correctness**, and **minimal dependencies**. Do not introduce new dependencies without clear justification.\n- **Transparent about your nature**: Always clearly identify yourself as Repo Assist, an automated AI assistant. 
Never pretend to be a human maintainer.\n- **Restrained**: When in doubt, do nothing. It is always better to stay silent than to post a redundant, unhelpful, or spammy comment. Human maintainers' attention is precious  -  do not waste it.\n\n## Memory\n\nUse persistent repo memory to track:\n\n- issues already commented on (with timestamps to detect new human activity)\n- fix attempts and outcomes, improvement ideas already submitted, a short to-do list\n- a **backlog cursor** so each run continues where the previous one left off\n- **which tasks were last run** (with timestamps) to support round-robin scheduling\n- the last time you performed certain periodic tasks (dependency updates, release preparation) to enforce frequency limits\n- previously checked off items (checked off by maintainer) in the Monthly Activity Summary to maintain an accurate pending actions list for maintainers\n\nRead memory at the **start** of every run; update it at the **end**.\n\n**Important**: Memory may not be 100% accurate. Issues may have been created, closed, or commented on; PRs may have been created, merged, commented on, or closed since the last run. Always verify memory against current repository state  -  reviewing recent activity since your last run is wise before acting on stale assumptions.\n\n## Workflow\n\nUse a **round-robin strategy**: each run, work on a different subset of tasks, rotating through them across runs so that all tasks get attention over time. Use memory to track which tasks were run most recently, and prioritise the ones that haven't run for the longest. Aim to do 2–4 tasks per run (plus the mandatory Task 11).\n\nAlways do Task 11 (Update Monthly Activity Summary Issue) every run. In all comments and PR descriptions, identify yourself as \"Repo Assist\".\n\n### Task 1: Triage and Comment on Open Issues\n\n1. List open issues sorted by creation date ascending (oldest first). Resume from your memory's backlog cursor; reset when you reach the end.\n2. 
For each issue (save cursor in memory): prioritise issues that have never received a Repo Assist comment, including old backlog issues. Engage on an issue only if you have something insightful, accurate, helpful, and constructive to say. Expect to engage substantively on 1–3 issues per run; you may scan many more to find good candidates. Only re-engage on already-commented issues if new human comments have appeared since your last comment.\n3. Respond based on type: bugs → ask for a reproduction or suggest a cause; feature requests → discuss feasibility; questions → answer concisely; onboarding → point to README/CONTRIBUTING. Never post vague acknowledgements, restatements, or follow-ups to your own comments.\n4. Begin every comment with: `🤖 *This is an automated response from Repo Assist.*`\n5. Update memory with comments made and the new cursor position.\n\n### Task 2: Fix Issues via Pull Requests\n\n**Only attempt fixes you are confident about.**\n\n1. Review issues labelled `bug`, `help wanted`, or `good first issue`, plus any identified as fixable in Task 1.\n2. For each fixable issue:\n   a. Check memory  -  skip if you've already tried. Never create duplicate PRs.\n   b. Create a fresh branch off `main`: `repo-assist/fix-issue-<N>-<desc>`.\n   c. Implement a minimal, surgical fix. Do not refactor unrelated code.\n   d. **Build and test (required)**: do not create a PR if the build fails or tests fail due to your changes. If tests fail due to infrastructure, create the PR but document it.\n   e. Add a test for the bug if feasible; re-run tests.\n   f. Create a draft PR with: AI disclosure, `Closes #N`, root cause, fix rationale, trade-offs, and a Test Status section showing build/test outcome.\n   g. Post a single brief comment on the issue linking to the PR.\n3. Update memory with fix attempts and outcomes.\n\n### Task 3: Study the Codebase and Propose Improvements\n\n**Be highly selective  -  only propose clearly beneficial, low-risk improvements.**\n\n1. 
Check memory for already-submitted ideas; do not re-propose them.\n2. Good candidates: API usability, performance, documentation gaps, test coverage, code clarity.\n3. Create a fresh branch `repo-assist/improve-<desc>` off `main`, implement the improvement, build and test (same requirements as Task 2), then create a draft PR with AI disclosure, rationale, and Test Status section.\n4. If not ready to implement, file an issue and note it in memory.\n5. Update memory.\n\n### Task 4: Update Dependencies and Engineering\n\n1. Check for outdated dependencies. Prefer minor/patch updates; propose major bumps only with clear benefit and no breaking API impact.\n2. Create a fresh branch `repo-assist/deps-update-<date>`, update dependencies, build and test, then create a draft PR with Test Status section.\n3. **Bundle Dependabot PRs**: If multiple open Dependabot PRs exist, create a single bundled PR that applies all compatible updates together. Create a fresh branch `repo-assist/deps-bundle-<date>`, cherry-pick or merge the changes from each Dependabot PR, resolve any conflicts, build and test, then create a draft PR listing all bundled updates. Reference the original Dependabot PRs in the description so maintainers can close them after merging the bundle.\n4. Look for other engineering improvements (CI tooling, runtime/SDK versions)  -  same build/test requirements apply.\n5. Update memory with what was checked and when.\n\n### Task 5: Maintain Repo Assist Pull Requests\n\n1. List all open PRs with the `[Repo Assist]` title prefix.\n2. For each PR: fix CI failures caused by your changes by pushing updates; resolve merge conflicts. If you've retried multiple times without success, comment and leave for human review.\n3. Do not push updates for infrastructure-only failures  -  comment instead.\n4. Update memory.\n\n### Task 6: Stale PR Nudges\n\n1. List open PRs not updated in 14+ days.\n2. 
For each (check memory  -  skip if already nudged): if the PR is waiting on the author, post a single polite comment asking if they need help or want to hand off. Do not comment if the PR is waiting on a maintainer.\n3. **Maximum 3 nudges per run.** Update memory.\n\n### Task 7: Manage Labels\n\nProcess as many issues and PRs as possible each run. Resume from memory's backlog cursor.\n\nFor each item, apply the best-fitting labels from: `bug`, `enhancement`, `help wanted`, `good first issue`, `documentation`, `question`, `duplicate`, `wontfix`, `spam`, `off topic`, `needs triage`, `needs investigation`, `breaking change`, `performance`, `security`, `refactor`. Remove misapplied labels. Apply multiple where appropriate; skip any you're not confident about. After labeling, post a comment if you have something genuinely useful to say.\n\nUpdate memory with labels applied and cursor position.\n\n### Task 8: Release Preparation\n\n1. Find merged PRs since the last release (check changelog or release tags).\n2. If significant unreleased changes exist, determine the version bump (patch/minor/major  -  never propose major without maintainer approval), create a fresh branch `repo-assist/release-vX.Y.Z`, update the changelog, and create a draft PR with AI disclosure and Test Status section.\n3. Skip if: no meaningful changes, a release PR is already open, or you recently proposed one.\n4. Update memory.\n\n### Task 9: Welcome New Contributors\n\n1. List PRs and issues opened in the last 24 hours. Check memory  -  do not welcome the same person twice.\n2. For first-time contributors, post a warm welcome with links to README and CONTRIBUTING.\n3. **Maximum 3 welcomes per run.** Update memory.\n\n### Task 10: Take the Repository Forward\n\nProactively move the repository forward. Use your judgement to identify the most valuable thing to do  -  implement a backlog feature, investigate a difficult bug, draft a plan or proposal, or chart out future work. 
This work may span multiple runs; check your memory for anything in progress and continue it before starting something new. Record progress and next steps in memory at the end of each run.\n\n### Task 11: Update Monthly Activity Summary Issue (ALWAYS DO THIS TASK IN ADDITION TO OTHERS)\n\nMaintain a single open issue titled `[Repo Assist] Monthly Activity {YYYY}-{MM}` as a rolling summary of all Repo Assist activity for the current month.\n\n1. Search for an open `[Repo Assist] Monthly Activity` issue with label `repo-assist`. If it's for the current month, update it. If for a previous month, close it and create a new one. Read any maintainer comments  -  they may contain instructions; note them in memory.\n2. **Issue body format**  -  use **exactly** this structure:\n\n   ```markdown\n   🤖 *Repo Assist here  -  I'm an automated AI assistant for this repository.*\n\n   ## Activity for <Month Year>\n\n   ## Suggested Actions for Maintainer\n\n   **Comprehensive list** of all pending actions requiring maintainer attention (excludes items already actioned and checked off). \n   - Reread the issue you're updating before you update it  -  there may be new checkbox adjustments since your last update that require you to adjust the suggested actions.\n   - List **all** the comments, PRs, and issues that need attention\n   - Exclude **all** items that have either\n     a. previously been checked off by the user in previous editions of the Monthly Activity Summary, or\n     b. 
had their linked issue or PR closed or merged\n   - Use memory to keep track of items checked off by the user.\n   - Be concise  -  one line per item, repeating the format lines as necessary:\n\n   * [ ] **Review PR** #<number>: <summary>  -  [Review](<link>)\n   * [ ] **Check comment** #<number>: Repo Assist commented  -  verify guidance is helpful  -  [View](<link>)\n   * [ ] **Merge PR** #<number>: <reason>  -  [Review](<link>)\n   * [ ] **Close issue** #<number>: <reason>  -  [View](<link>)\n   * [ ] **Close PR** #<number>: <reason>  -  [View](<link>)\n   * [ ] **Define goal**: <suggestion>  -  [Related issue](<link>)\n\n   *(If no actions needed, state \"No suggested actions at this time.\")*\n\n   ## Future Work for Repo Assist\n\n   {List future work for Repo Assist}\n\n   *(If nothing pending, skip this section.)*\n\n   ## Run History\n\n   ### <YYYY-MM-DD HH:MM UTC>  -  [Run](<https://github.com/<repo>/actions/runs/<run-id>>)\n   - 💬 Commented on #<number>: <short description>\n   - 🔧 Created PR #<number>: <short description>\n   - 🏷️ Labelled #<number> with `<label>`\n   - 📝 Created issue #<number>: <short description>\n\n   ### <YYYY-MM-DD HH:MM UTC>  -  [Run](<https://github.com/<repo>/actions/runs/<run-id>>)\n   - 🔄 Updated PR #<number>: <short description>\n   - 💬 Commented on PR #<number>: <short description>\n   ```\n\n3. **Format enforcement (MANDATORY)**:\n   - Always use the exact format above. If the existing body uses a different format, rewrite it entirely.\n   - **Suggested Actions comes first**, immediately after the month heading, so maintainers see the action list without scrolling.\n   - **Run History is in reverse chronological order**  -  prepend each new run's entry at the top of the Run History section so the most recent activity appears first.\n   - **Each run heading includes the date, time (UTC), and a link** to the GitHub Actions run: `### YYYY-MM-DD HH:MM UTC  -  [Run](https://github.com/<repo>/actions/runs/<run-id>)`. 
Use `${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}` for the current run's link.\n   - **Actively remove completed items** from \"Suggested Actions\"  -  do not tick them `[x]`; delete the line when actioned. The checklist contains only pending items.\n   - Use `* [ ]` checkboxes in \"Suggested Actions\". Never use plain bullets there.\n4. **Comprehensive suggested actions**: The \"Suggested Actions for Maintainer\" section must be a **complete list** of all pending items requiring maintainer attention, including:\n   - All open Repo Assist PRs needing review or merge\n   - **All Repo Assist comments** that haven't been acknowledged by a maintainer (use \"Check comment\" for each)\n   - Issues that should be closed (duplicates, resolved, etc.)\n   - PRs that should be closed (stale, superseded, etc.)\n   - Any strategic suggestions (goals, priorities)\n   Use repo memory and the activity log to compile this list. Include direct links for every item. Keep entries to one line each.\n5. Do not update the activity issue if nothing was done in the current run.\n\n## Guidelines\n\n- **No breaking changes** without maintainer approval via a tracked issue.\n- **No new dependencies** without discussion in an issue first.\n- **Small, focused PRs**  -  one concern per PR.\n- **Read AGENTS.md first**: before starting work on any pull request, read the repository's `AGENTS.md` file (if present) to understand project-specific conventions, coding standards, and contribution requirements.\n- **Build, format, lint, and test before every PR**: run any code formatting, linting, and testing checks configured in the repository. Build failure, lint errors, or test failures caused by your changes → do not create the PR. 
Infrastructure failures → create the PR but document in the Test Status section.\n- **Respect existing style**  -  match code formatting and naming conventions.\n- **AI transparency**: every comment, PR, and issue must include a Repo Assist disclosure with 🤖.\n- **Anti-spam**: no repeated or follow-up comments to yourself in a single run; re-engage only when new human comments have appeared.\n- **Systematic**: use the backlog cursor to process oldest issues first over successive runs. Do not stop early.\n- **Quality over quantity**: noise erodes trust. Do nothing rather than add low-value output."
  },
  {
    "path": ".gitignore",
    "content": "# See https://www.dartlang.org/guides/libraries/private-files\n\n# Files and directories created by pub\n.dart_tool/\n.packages\nbuild/\n.DS_Store\n# If you're building an application, you may want to check-in your pubspec.lock\npubspec.lock\n\n.vscode/\n# Directory created by dartdoc\n# If you don't generate documentation locally you can remove this line.\ndoc/api/\n\n# Avoid committing generated Javascript files:\n*.dart.js\n*.info.json      # Produced by the --dump-info flag.\n*.js             # When generated by dart2js. Don't specify *.js if your\n                 # project includes source files written in JavaScript.\n*.js_\n*.js.deps\n*.js.map\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "## 3.0.6\nBug fixes and performance improvements\n\n## 3.0.5\nBug fixes and performance improvements\n\n## 3.0.2\nBug fixes and performance improvements\n\n## 3.0.0\nOne letter TLD not allowed anymore\n\n## 2.2.9\nOne letter TLD not allowed\n\n## 2.1.2\nFix static analyzer warnings and update Dart SDK\n\n## 2.0.1\nMigrate to new null safety SDK. See https://dart.dev/null-safety.\n\n## 2.0.0-nullsafety\nMigrate to new null safety SDK. See https://dart.dev/null-safety.\n\n## 1.0.6\nCode clean up\n\n## 1.0.5\nAdd parameter documentation for the `EmailValidator.validate` method\n\n## 1.0.4\nFix Dart sdk analysis warnings\n\n## 1.0.3\nMinor code style fixes again\n\n## 1.0.2\nMinor code style fixes\n\n## 1.0.1\nAllow international now defaults to true.\n\n## 1.0.0\nEmailValidator.Validate() is now, by Dart convention, EmailValidator.validate().\n\n## 0.1.6\nCleaned up code a bit, no API changes.\n\n## 0.1.5\nCleaned up code a bit, no API changes.\n\n## 0.1.4\nNow supports a broader variety of Dart sdk versions\n\n## 0.1.0\nValidate emails through a static method `EmailValidator.validate()`."
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to email-validator.dart\n\nThank you for your interest in contributing! This document explains how to set up the project locally, run tests, and submit a pull request.\n\n## Development Setup\n\nYou'll need the [Dart SDK](https://dart.dev/get-dart) installed.\n\n```bash\n# Install dependencies\ndart pub get\n```\n\n## Running Tests\n\n```bash\ndart test\n```\n\nOr to run the single test file directly:\n\n```bash\ndart test test/email_validator_test.dart\n```\n\n## Linting and Formatting\n\n```bash\n# Run the static analyser (must pass with no errors or warnings)\ndart analyze --fatal-infos\n\n# Check formatting (must pass before merging)\ndart format --output=none --set-exit-if-changed .\n\n# Auto-fix formatting\ndart format .\n```\n\n## Project Structure\n\n```\nlib/email_validator.dart   # Single-file library — all parsing logic lives here\ntest/email_validator_test.dart  # All tests; valid/invalid/international address lists\nexample/example.dart       # Short usage example\n```\n\nThe parser is cursor-based: a shared `_index` field advances through the email string inside the `EmailValidator` class. There are no external dependencies. Please keep it that way — no new dependencies without a prior discussion in an issue.\n\n## Submitting a Pull Request\n\n1. Fork the repository and create a branch from `master`.\n2. Make your changes. For bug fixes, add a regression test that fails before your fix and passes after.\n3. Ensure `dart analyze --fatal-infos` and `dart format --output=none --set-exit-if-changed .` both pass.\n4. Run `dart test` and confirm all tests pass.\n5. Open a pull request with a clear description of the problem and solution.\n\n## Reporting Bugs\n\nPlease open a GitHub issue with:\n- The email address that produces the unexpected result.\n- The expected outcome (valid/invalid) and the actual outcome.\n- The package version you are using.\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Fredrik Eilertsen\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# **Email Validator.dart**\n\n[![CI](https://github.com/fredeil/email-validator.dart/actions/workflows/ci.yml/badge.svg)](https://github.com/fredeil/email-validator.dart/actions/workflows/ci.yml)\n[![pub package](https://img.shields.io/pub/v/email_validator.svg)](https://pub.dev/packages/email_validator)\n\nA simple Dart class for validating email addresses without using RegEx. Can also be used to validate emails within Flutter apps (see [Flutter email validation](https://github.com/fredeil/flutter-email-validator)).\n\n**NB:** This library only validates the syntax of an email address; it does not look up the domain or check whether the address actually exists.\n\n**Featured in:**\n\n1. [How To Validate Emails in Flutter](https://betterprogramming.pub/how-to-validate-emails-in-flutter-957ae75926c9) by https://github.com/lucianojung\n2. [Flutter Tutorial - Email Validation In 7 Minutes](https://www.youtube.com/watch?v=mXyifVJ-NFc) by https://github.com/JohannesMilke\n3. [Flutter Tutorial - Email Validation | Package of the week](https://www.youtube.com/watch?v=ZN_7Pur5h8Q&t=31s) by https://github.com/Dhanraj-FlutterDev\n\n**Found in several big libraries and apps:**\n\n1. [Google Firebase](https://github.com/firebase/FirebaseUI-Flutter)\n1. [Flutter GenUI](https://github.com/flutter/genui)\n1. [Supabase - Flutter auth UI](https://github.com/supabase-community/flutter-auth-ui)\n1. [TubeCards - The world’s best flashcard platform](https://github.com/friebetill/TubeCards)\n1. [Serverpod - Serverpod is a next-generation app and web server, explicitly built for Flutter](https://github.com/serverpod/serverpod)\n1. [Several other packages on pub.dev](https://pub.dev/packages?q=dependency%3Aemail_validator&sort=downloads)\n\nAnd many more!\n\n## **Installation**\n\n### 1. Depend on it\n\nAdd this to your package's `pubspec.yaml` file:\n\n```yaml\ndependencies:\n  email_validator: '^3.0.6'\n```\n\n### 2. 
Install it\n\nYou can install packages from the command line:\n\n```bash\n$ dart pub get\n```\n\nAlternatively, your editor might support pub. Check the docs for your editor to learn more.\n\n### 3. Import it\n\nNow in your Dart code, you can use:\n\n```dart\nimport 'package:email_validator/email_validator.dart';\n```\n\n## **Usage**\n\nRead the unit tests under `test`, or see the code example below:\n\n```dart\nvoid main() {\n  const email = 'fredrik@gmail.com';\n\n  assert(EmailValidator.validate(email));\n}\n```\n\n## Tips\n\nYou can also use this repo as a template for creating Dart packages: just clone the repo and start hacking :)\n"
  },
  {
    "path": "analysis_options.yaml",
    "content": "analyzer:\n  language:\n  strong-mode:\n    implicit-dynamic: false\n  errors:\n    missing_required_param: warning\n    missing_return: warning\n    todo: ignore\n\nlinter:\n  rules:\n    # these rules are documented on and in the same order as\n    # the Dart Lint rules page to make maintenance easier\n    # https://github.com/dart-lang/linter/blob/master/example/all.yaml\n    - always_declare_return_types\n    - always_put_control_body_on_new_line\n    - annotate_overrides\n    - avoid_empty_else\n    - avoid_field_initializers_in_const_classes\n    - avoid_function_literals_in_foreach_calls\n    - avoid_init_to_null\n    - avoid_null_checks_in_equality_operators\n    - avoid_relative_lib_imports\n    - avoid_renaming_method_parameters\n    - avoid_return_types_on_setters\n    - avoid_slow_async_io\n    - avoid_types_as_parameter_names\n    - avoid_unused_constructor_parameters\n    - await_only_futures\n    - camel_case_types\n    - cancel_subscriptions\n    - control_flow_in_finally\n    - directives_ordering\n    - empty_catches\n    - empty_constructor_bodies\n    - empty_statements\n    - hash_and_equals\n    - implementation_imports\n    - library_names\n    - library_prefixes\n    - no_adjacent_strings_in_list\n    - no_duplicate_case_values\n    - non_constant_identifier_names\n    - overridden_fields\n    - package_names\n    - package_prefixed_library_names\n    - prefer_adjacent_string_concatenation\n    - prefer_asserts_in_initializer_lists\n    - prefer_collection_literals\n    - prefer_conditional_assignment\n    - prefer_const_constructors\n    - prefer_const_constructors_in_immutables\n    - prefer_const_declarations\n    - prefer_const_literals_to_create_immutables\n    - prefer_contains\n    - prefer_final_fields\n    - prefer_final_locals\n    - prefer_foreach\n    - prefer_initializing_formals\n    - prefer_iterable_whereType\n    - prefer_is_empty\n    - prefer_is_not_empty\n    - prefer_single_quotes\n    - 
prefer_typing_uninitialized_variables\n    - recursive_getters\n    - slash_for_doc_comments\n    - sort_constructors_first\n    - sort_unnamed_constructors_first\n    - test_types_in_equals\n    - throw_in_finally\n    - type_init_formals\n    - unnecessary_brace_in_string_interps\n    - unnecessary_const\n    - unnecessary_getters_setters\n    - unnecessary_null_aware_assignments\n    - unnecessary_null_in_if_null_operators\n    - unnecessary_overrides\n    - unnecessary_parenthesis\n    - unnecessary_statements\n    - unnecessary_this\n    - unrelated_type_equality_checks\n    - use_rethrow_when_possible\n    - valid_regexps\n"
  },
  {
    "path": "example/README.md",
    "content": "# Examples\n\n* See [example.dart](./example.dart) for a basic example\n* See [fredeil/flutter-email-validator](https://github.com/fredeil/flutter-email-validator) for a Flutter login example that validates email addresses\n"
  },
  {
    "path": "example/example.dart",
    "content": "import 'package:email_validator/email_validator.dart';\n\nvoid main() {\n  // Basic validation — default: allowTopLevelDomains=false, allowInternational=true\n  final examples = [\n    ('user@example.com', 'standard address'),\n    ('invalid-email', 'missing @'),\n    ('user@', 'missing domain'),\n    ('\" \"@example.org', 'quoted space — valid per RFC'),\n  ];\n\n  print('--- Basic validation ---');\n  for (final (email, note) in examples) {\n    final valid = EmailValidator.validate(email);\n    print('  ${valid ? '✓' : '✗'} $email  ($note)');\n  }\n\n  // Allow top-level domains (useful for intranet/localhost addresses)\n  print('\\n--- allowTopLevelDomains = true ---');\n  for (final email in ['admin@localhost', 'user@intranet']) {\n    final valid = EmailValidator.validate(email, true);\n    print('  ${valid ? '✓' : '✗'} $email');\n  }\n\n  // International addresses (enabled by default)\n  print('\\n--- International addresses (default: allowed) ---');\n  final international = [\n    '伊昭傑@郵件.商務', // Chinese\n    'θσερ@εχαμπλε.ψομ', // Greek\n  ];\n  for (final email in international) {\n    final validOn = EmailValidator.validate(email); // allowInternational=true\n    final validOff =\n        EmailValidator.validate(email, false, false); // allowInternational=false\n    print('  allowed=$validOn  rejected=${!validOff}  $email');\n  }\n}\n"
  },
  {
    "path": "lib/email_validator.dart",
    "content": "library email_validator;\n\n/// The subdomain type\n///\n/// A subdomain label is either None, Alphabetic, Numeric or AlphaNumeric\nenum SubdomainType { None, Alphabetic, Numeric, AlphaNumeric }\n\n/// The EmailValidator entry point\n///\n/// To use the EmailValidator class, call EmailValidator.methodName\nclass EmailValidator {\n  static int _index = 0;\n\n  static const String _atomCharacters = \"!#\\$%&'*+-/=?^_`{|}~\";\n  static SubdomainType _domainType = SubdomainType.None;\n\n  static bool _isDigit(String c) {\n    return c.codeUnitAt(0) >= 48 && c.codeUnitAt(0) <= 57;\n  }\n\n  static bool _isLetter(String c) {\n    return (c.codeUnitAt(0) >= 65 && c.codeUnitAt(0) <= 90) ||\n        (c.codeUnitAt(0) >= 97 && c.codeUnitAt(0) <= 122);\n  }\n\n  static bool _isLetterOrDigit(String c) {\n    return _isLetter(c) || _isDigit(c);\n  }\n\n  static bool _isAtom(String c, bool allowInternational) {\n    return c.codeUnitAt(0) < 128\n        ? _isLetterOrDigit(c) || _atomCharacters.contains(c)\n        : allowInternational;\n  }\n\n  // Returns true if c may appear inside a subdomain label: an ASCII\n  // letter or '-' marks the label Alphabetic, an ASCII digit marks it\n  // Numeric, and a non-ASCII character is accepted (as Alphabetic)\n  // only when allowInternational is true.\n  static bool _isDomain(String c, bool allowInternational) {\n    if (c.codeUnitAt(0) < 128) {\n      if (_isLetter(c) || c == '-') {\n        _domainType = SubdomainType.Alphabetic;\n        return true;\n      }\n\n      if (_isDigit(c)) {\n        _domainType = SubdomainType.Numeric;\n     
   return true;\n      }\n\n      return false;\n    }\n\n    if (allowInternational) {\n      _domainType = SubdomainType.Alphabetic;\n      return true;\n    }\n\n    return false;\n  }\n\n  // Returns true if c may start a subdomain label, recording the label\n  // type in _domainType; otherwise resets _domainType to None and\n  // returns false.\n  static bool _isDomainStart(String c, bool allowInternational) {\n    if (c.codeUnitAt(0) < 128) {\n      if (_isLetter(c)) {\n        _domainType = SubdomainType.Alphabetic;\n        return true;\n      }\n\n      if (_isDigit(c)) {\n        _domainType = SubdomainType.Numeric;\n        return true;\n      }\n\n      _domainType = SubdomainType.None;\n\n      return false;\n    }\n\n    if (allowInternational) {\n      _domainType = SubdomainType.Alphabetic;\n      return true;\n    }\n\n    _domainType = SubdomainType.None;\n\n    return false;\n  }\n\n  static bool _skipAtom(String text, bool allowInternational) {\n    final startIndex = _index;\n\n    while (_index < text.length && _isAtom(text[_index], allowInternational)) {\n      _index++;\n    }\n\n    return _index > startIndex;\n  }\n\n  // Consumes one subdomain label (at most 63 characters) at _index.\n  // Returns false for an empty label, a one-letter final label, or a\n  // label that ends with '-'.\n  static bool _skipSubDomain(String text, bool allowInternational) {\n    final startIndex = _index;\n\n    if (!_isDomainStart(text[_index], allowInternational)) {\n      return false;\n    }\n\n    _index++;\n\n    while (_index < text.length &&\n        _isDomain(text[_index], allowInternational)) {\n      _index++;\n    }\n\n    // A one-letter TLD is not valid\n    if (_index == text.length && (_index - startIndex) == 1) {\n      return false;\n    }\n\n    return (_index - startIndex) < 64 && text[_index - 1] != '-';\n  }\n\n  // Consumes the dot-separated domain at _index. Returns false if any\n  // label is invalid, the final label is all-numeric, or the domain is\n  // a single label while allowTopLevelDomains is false.\n  static bool _skipDomain(\n    String text,\n    bool allowTopLevelDomains,\n    bool allowInternational,\n  ) {\n    if (!_skipSubDomain(text, allowInternational)) {\n      return 
false;\n    }\n\n    if (_index < text.length && text[_index] == '.') {\n      do {\n        _index++;\n\n        if (_index == text.length) {\n          return false;\n        }\n\n        if (!_skipSubDomain(text, allowInternational)) {\n          return false;\n        }\n      } while (_index < text.length && text[_index] == '.');\n    } else if (!allowTopLevelDomains) {\n      return false;\n    }\n\n    // Note: by allowing AlphaNumeric,\n    // we get away with not having to support punycode.\n    if (_domainType == SubdomainType.Numeric) {\n      return false;\n    }\n\n    return true;\n  }\n\n  // Function skips over quoted text where if quoted text is in the string\n  // the function returns true\n  // otherwise the function returns false\n  static bool _skipQuoted(String text, bool allowInternational) {\n    var escaped = false;\n\n    // skip over leading '\"'\n    _index++;\n\n    while (_index < text.length) {\n      if (text[_index].codeUnitAt(0) >= 128 && !allowInternational) {\n        return false;\n      }\n\n      if (text[_index] == '\\\\') {\n        escaped = !escaped;\n      } else if (!escaped) {\n        if (text[_index] == '\"') {\n          break;\n        }\n      } else {\n        escaped = false;\n      }\n\n      _index++;\n    }\n\n    if (_index >= text.length || text[_index] != '\"') {\n      return false;\n    }\n\n    _index++;\n\n    return true;\n  }\n\n  /// Attempts to parse an IPv4 address literal at the current [_index] in [text].\n  ///\n  /// Expects exactly four decimal groups in the range 0–255, separated by\n  /// dots (e.g. `127.0.0.1`).  
Advances [_index] past the address on success.\n  /// Returns `true` if a valid four-octet address was consumed, `false` otherwise.\n  static bool _skipIPv4Literal(String text) {\n    var groups = 0;\n\n    while (_index < text.length && groups < 4) {\n      final startIndex = _index;\n      var value = 0;\n\n      while (_index < text.length &&\n          text[_index].codeUnitAt(0) >= 48 &&\n          text[_index].codeUnitAt(0) <= 57) {\n        value = (value * 10) + (text[_index].codeUnitAt(0) - 48);\n        _index++;\n      }\n\n      if (_index == startIndex || _index - startIndex > 3 || value > 255) {\n        return false;\n      }\n\n      groups++;\n\n      if (groups < 4 && _index < text.length && text[_index] == '.') {\n        _index++;\n      }\n    }\n\n    return groups == 4;\n  }\n\n  static bool _isHexDigit(String str) {\n    final c = str.codeUnitAt(0);\n    return (c >= 65 && c <= 70) ||\n        (c >= 97 && c <= 102) ||\n        (c >= 48 && c <= 57);\n  }\n\n  // This needs to handle the following forms:\n  //\n  // IPv6-addr = IPv6-full / IPv6-comp / IPv6v4-full / IPv6v4-comp\n  // IPv6-hex  = 1*4HEXDIG\n  // IPv6-full = IPv6-hex 7(\":\" IPv6-hex)\n  // IPv6-comp = [IPv6-hex *5(\":\" IPv6-hex)] \"::\" [IPv6-hex *5(\":\" IPv6-hex)]\n  //             ; The \"::\" represents at least 2 16-bit groups of zeros\n  //             ; No more than 6 groups in addition to the \"::\" may be\n  //             ; present\n  // IPv6v4-full = IPv6-hex 5(\":\" IPv6-hex) \":\" IPv4-address-literal\n  // IPv6v4-comp = [IPv6-hex *3(\":\" IPv6-hex)] \"::\"\n  //               [IPv6-hex *3(\":\" IPv6-hex) \":\"] IPv4-address-literal\n  //             ; The \"::\" represents at least 2 16-bit groups of zeros\n  //             ; No more than 4 groups in addition to the \"::\" and\n  //             ; IPv4-address-literal may be present\n  static bool _skipIPv6Literal(String text) {\n    var compact = false;\n    var colons = 0;\n\n    while (_index < text.length) {\n   
   var startIndex = _index;\n\n      while (_index < text.length && _isHexDigit(text[_index])) {\n        _index++;\n      }\n\n      if (_index >= text.length) {\n        break;\n      }\n\n      if (_index > startIndex && colons > 2 && text[_index] == '.') {\n        // IPv6v4\n        _index = startIndex;\n\n        if (!_skipIPv4Literal(text)) {\n          return false;\n        }\n\n        return compact ? colons < 6 : colons == 6;\n      }\n\n      var count = _index - startIndex;\n      if (count > 4) {\n        return false;\n      }\n\n      if (text[_index] != ':') {\n        break;\n      }\n\n      startIndex = _index;\n      while (_index < text.length && text[_index] == ':') {\n        _index++;\n      }\n\n      count = _index - startIndex;\n      if (count > 2) {\n        return false;\n      }\n\n      if (count == 2) {\n        if (compact) {\n          return false;\n        }\n\n        compact = true;\n        colons += 2;\n      } else {\n        colons++;\n      }\n    }\n\n    if (colons < 2) {\n      return false;\n    }\n\n    return compact ? 
colons < 7 : colons == 7;\n  }\n\n  /// Validate the specified email address.\n  ///\n  /// If [allowTopLevelDomains] is `true`, then the validator will\n  /// allow addresses with top-level domains like `email@example`.\n  ///\n  /// If [allowInternational] is `true`, then the validator\n  /// will use the newer International Email standards for validating\n  /// the email address.\n  static bool validate(\n    String email, [\n    bool allowTopLevelDomains = false,\n    bool allowInternational = true,\n  ]) {\n    _index = 0;\n\n    if (email.isEmpty || email.length >= 255) {\n      return false;\n    }\n\n    // Local-part = Dot-string / Quoted-string\n    //       ; MAY be case-sensitive\n    //\n    // Dot-string = Atom *(\".\" Atom)\n    //\n    // Quoted-string = DQUOTE *qcontent DQUOTE\n    if (email[_index] == '\"') {\n      if (!_skipQuoted(email, allowInternational) || _index >= email.length) {\n        return false;\n      }\n    } else {\n      if (!_skipAtom(email, allowInternational) || _index >= email.length) {\n        return false;\n      }\n\n      while (email[_index] == '.') {\n        _index++;\n\n        if (_index >= email.length) {\n          return false;\n        }\n\n        if (!_skipAtom(email, allowInternational)) {\n          return false;\n        }\n\n        if (_index >= email.length) {\n          return false;\n        }\n      }\n    }\n\n    if (_index + 1 >= email.length || _index > 64 || email[_index++] != '@') {\n      return false;\n    }\n\n    if (email[_index] != '[') {\n      // domain\n      if (!_skipDomain(email, allowTopLevelDomains, allowInternational)) {\n        return false;\n      }\n\n      return _index == email.length;\n    }\n\n    // address literal\n    _index++;\n\n    // we need at least 8 more characters\n    if (_index + 8 >= email.length) {\n      return false;\n    }\n\n    final ipv6 = email.substring(_index - 1).toLowerCase();\n\n    if (ipv6.contains('ipv6:')) {\n      _index += 
'IPv6:'.length;\n      if (!_skipIPv6Literal(email)) {\n        return false;\n      }\n    } else {\n      if (!_skipIPv4Literal(email)) {\n        return false;\n      }\n    }\n\n    if (_index >= email.length || email[_index++] != ']') {\n      return false;\n    }\n\n    return _index == email.length;\n  }\n}\n"
  },
  {
    "path": "pubspec.yaml",
    "content": "name: email_validator\r\nversion: 3.0.6\r\n\r\nhomepage: https://github.com/fredeil/email-validator.dart\r\ndescription: A simple (but correct) Dart class for validating email addresses\r\n\r\nenvironment:\r\n  sdk: '>=2.12.0 <4.0.0'\r\n\r\ndev_dependencies:\r\n  test: ^1.16.8\r\n\r\n"
  },
  {
    "path": "test/email_validator_test.dart",
    "content": "import 'package:email_validator/email_validator.dart';\nimport 'package:test/test.dart';\n\nvoid main() {\n  final List<String> validAddresses = [\n    'fredrik@dualog.com',\n    '\\\"Abc\\\\@def\\\"@example.com',\n    '\\\"Fred Bloggs\\\"@example.com',\n    '\\\"Joe\\\\\\\\Blow\\\"@example.com',\n    '\\\"Abc@def\\\"@example.com',\n    'customer/department=shipping@example.com',\n    '\\$A12345@example.com',\n    '!def!xyz%abc@example.com',\n    '_somename@example.com',\n    'valid.ipv4.addr@[123.1.72.10]',\n    'valid.ipv6.addr@[IPv6:0::1]',\n    'valid.ipv6.addr@[IPv6:2607:f0d0:1002:51::4]',\n    'valid.ipv6.addr@[IPv6:fe80::230:48ff:fe33:bc33]',\n    'valid.ipv6.addr@[IPv6:fe80:0000:0000:0000:0202:b3ff:fe1e:8329]',\n    'valid.ipv6v4.addr@[IPv6:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:127.0.0.1]',\n\n    // examples from wikipedia\n    'niceandsimple@example.com',\n    'very.common@example.com',\n    'a.little.lengthy.but.fine@dept.example.com',\n    'disposable.style.email.with+symbol@example.com',\n    'user@[IPv6:2001:db8:1ff::a0b:dbd0]',\n    '\\\"much.more unusual\\\"@example.com',\n    '\\\"very.unusual.@.unusual.com\\\"@example.com',\n    '\\\"very.(),:;<>[]\\\\\\\".VERY.\\\\\\\"very@\\\\\\\\ \\\\\\\"very\\\\\\\".unusual\\\"@strange.example.com',\n    \"!#\\$%&'*+-/=?^_`{}|~@example.org\",\n    \"\\\"()<>[]:,;@\\\\\\\\\\\\\\\"!#\\$%&'*+-/=?^_`{}| ~.a\\\"@example.org\",\n    '\" \"@example.org',\n\n    // examples from https://github.com/Sembiance/email-validator\n    '\\\"\\\\e\\\\s\\\\c\\\\a\\\\p\\\\e\\\\d\\\"@sld.com',\n    '\\\"back\\\\slash\\\"@sld.com',\n    '\\\"escaped\\\\\\\"quote\\\"@sld.com',\n    '\\\"quoted\\\"@sld.com',\n    '\\\"quoted-at-sign@sld.org\\\"@sld.com',\n    \"&'*+-./=?^_{}~@other-valid-characters-in-local.net\",\n    '01234567890@numbers-in-local.net',\n    'a@single-character-in-local.org',\n    'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@letters-in-local.org',\n    'backticksarelegit@test.com',\n    
'bracketed-IP-instead-of-domain@[127.0.0.1]',\n    'country-code-tld@sld.rw',\n    'country-code-tld@sld.uk',\n    'letters-in-sld@123.com',\n    'local@dash-in-sld.com',\n    'local@sld.newTLD',\n    'local@sub.domains.com',\n    'mixed-1234-in-{+^}-local@sld.net',\n    'one-character-third-level@a.example.com',\n    'one-letter-sld@x.org',\n    'punycode-numbers-in-tld@sld.xn--3e0b707e',\n    'single-character-in-sld@x.org',\n    'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-three-characters-so-it-is-valid-blah-blah.com',\n    'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-254-characters-exactly.so-it-should-be-valid.and-im-going-to-add-some-more-words-here.to-increase-the-length-blah-blah-blah-blah-bla.org',\n    'uncommon-tld@sld.mobi',\n    'uncommon-tld@sld.museum',\n    'uncommon-tld@sld.travel',\n  ];\n\n  final List<String> invalidAddresses = [\n    'invalid',\n    'invalid@',\n    'invalid @',\n    'invalid@[555.666.777.888]',\n    'invalid@[IPv6:123456]',\n    'invalid@[127.0.0.1.]',\n    'invalid@[127.0.0.1].',\n    'invalid@[127.0.0.1]x',\n\n    // examples from wikipedia\n    'Abc.example.com',\n    'A@b@c@example.com',\n    'a\\\"b(c)d,e:f;g<h>i[j\\\\k]l@example.com',\n    'just\\\"not\\\"right@example.com',\n    'this is\\\"not\\\\allowed@example.com',\n    'this\\\\ still\\\\\\\"not\\\\\\\\allowed@example.com',\n\n    // examples from https://github.com/Sembiance/email-validator\n    '! #\\$%`|@invalid-characters-in-local.org',\n    '(),:;`|@more-invalid-characters-in-local.org',\n    '* .local-starts-with-dot@sld.com',\n    '<>@[]`|@even-more-invalid-characters-in-local.org',\n    '@missing-local.org',\n    'IP-and-port@127.0.0.1:25',\n    'another-invalid-ip@127.0.0.256',\n    'invalid',\n    'invalid-characters-in-sld@! 
\\\"#\\$%(),/;<>_[]`|.org',\n    'invalid-ip@127.0.0.1.26',\n    'local-ends-with-dot.@sld.com',\n    'missing-at-sign.net',\n    'missing-sld@.com',\n    'missing-tld@sld.',\n    'sld-ends-with-dash@sld-.com',\n    'sld-starts-with-dash@-sld.com',\n    'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-four-characters-so-it-is-invalid-blah-blah.com',\n    'the-local-part-is-invalid-if-it-is-longer-than-sixty-four-characters@sld.net',\n    'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-255-characters-exactly.so-it-should-be-invalid.and-im-going-to-add-some-more-words-here.to-increase-the-lenght-blah-blah-blah-blah-bl.org',\n    'two..consecutive-dots@sld.com',\n    'unbracketed-IP@127.0.0.1',\n    'onelettertld@gmail.c',\n\n    // examples of real (invalid) input from real users.\n    'No longer available.',\n    'Moved.',\n  ];\n\n  final List<String> validInternational = [\n    '伊昭傑@郵件.商務', // Chinese\n    'राम@मोहन.ईन्फो', // Hindi\n    'юзер@екзампл.ком', // Ukrainian\n    'θσερ@εχαμπλε.ψομ', // Greek\n  ];\n\n  test('Validate invalidAddresses are invalid emails', () {\n    for (var actual in invalidAddresses) {\n      expect(\n        EmailValidator.validate(actual, true),\n        equals(false),\n        reason: 'E-mail: ' + actual,\n      );\n    }\n  });\n\n  test('Validate validAddresses are valid emails', () {\n    for (var actual in validAddresses) {\n      expect(\n        EmailValidator.validate(actual, true),\n        equals(true),\n        reason: 'E-mail: ' + actual,\n      );\n    }\n  });\n\n  test('Validate validInternational are valid emails', () {\n    for (var actual in validInternational) {\n      expect(\n        EmailValidator.validate(actual, true, true),\n        equals(true),\n        reason: 'E-mail: ' + actual,\n      );\n    }\n  });\n\n  test('Validate empty and whitespace-only input is invalid', () {\n    
expect(EmailValidator.validate(''), equals(false));\n    expect(EmailValidator.validate(' '), equals(false));\n    expect(EmailValidator.validate('\\t'), equals(false));\n  });\n\n  test('Validate default parameter values', () {\n    // Default: allowTopLevelDomains = false, allowInternational = true\n    expect(\n      EmailValidator.validate('user@example.com'),\n      equals(true),\n      reason: 'Standard email with defaults should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@example'),\n      equals(false),\n      reason: 'Top-level domain should be rejected by default',\n    );\n    expect(\n      EmailValidator.validate('伊昭傑@郵件.商務'),\n      equals(true),\n      reason: 'International email should be valid by default',\n    );\n  });\n\n  test('Validate allowTopLevelDomains parameter', () {\n    expect(\n      EmailValidator.validate('admin@mailserver', false),\n      equals(false),\n      reason:\n          'TLD-only address should be invalid when allowTopLevelDomains is false',\n    );\n    expect(\n      EmailValidator.validate('admin@mailserver', true),\n      equals(true),\n      reason:\n          'TLD-only address should be valid when allowTopLevelDomains is true',\n    );\n    expect(\n      EmailValidator.validate('user@example', true),\n      equals(true),\n      reason:\n          'Single-label domain should be valid when allowTopLevelDomains is true',\n    );\n  });\n\n  test('Validate allowInternational parameter rejects non-ASCII when false', () {\n    expect(\n      EmailValidator.validate('伊昭傑@郵件.商務', false, false),\n      equals(false),\n      reason:\n          'International email should be invalid when allowInternational is false',\n    );\n    expect(\n      EmailValidator.validate('user@example.com', false, false),\n      equals(true),\n      reason: 'ASCII email should be valid regardless of allowInternational',\n    );\n  });\n\n  test('Validate local-part length boundary', () {\n    final local64 = 'a' * 64;\n  
  final local65 = 'a' * 65;\n    expect(\n      EmailValidator.validate('$local64@x.org'),\n      equals(true),\n      reason: '64-character local-part should be valid',\n    );\n    expect(\n      EmailValidator.validate('$local65@x.org'),\n      equals(false),\n      reason: '65-character local-part should be invalid',\n    );\n  });\n\n  test('Validate total email length boundary', () {\n    // The validator rejects emails with length >= 255, so 254 is max valid\n    const valid254 =\n        'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-254-characters-exactly.so-it-should-be-valid.and-im-going-to-add-some-more-words-here.to-increase-the-length-blah-blah-blah-blah-bla.org';\n    const invalid255 =\n        'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-255-characters-exactly.so-it-should-be-invalid.and-im-going-to-add-some-more-words-here.to-increase-the-lenght-blah-blah-blah-blah-bl.org';\n    expect(valid254.length, equals(254));\n    expect(\n      EmailValidator.validate(valid254),\n      equals(true),\n      reason: '254-character email should be valid',\n    );\n    expect(invalid255.length, equals(255));\n    expect(\n      EmailValidator.validate(invalid255),\n      equals(false),\n      reason: '255-character email should be invalid',\n    );\n  });\n\n  test('Validate domain label length boundary', () {\n    expect(\n      EmailValidator.validate(\n        'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-three-characters-so-it-is-valid-blah-blah.com',\n      ),\n      equals(true),\n      reason: '63-character domain label should be valid',\n    );\n    expect(\n      EmailValidator.validate(\n        'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-four-characters-so-it-is-invalid-blah-blah.com',\n      ),\n      
equals(false),\n      reason: '64-character domain label should be invalid',\n    );\n  });\n\n  test('Validate domain starting or ending with hyphen is invalid', () {\n    expect(\n      EmailValidator.validate('user@-example.com'),\n      equals(false),\n      reason: 'Domain starting with hyphen should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@example-.com'),\n      equals(false),\n      reason: 'Domain label ending with hyphen should be invalid',\n    );\n  });\n\n  test('Validate double hyphens within domain are valid', () {\n    expect(\n      EmailValidator.validate('user@a--b.com'),\n      equals(true),\n      reason: 'Double hyphens within a domain label should be valid',\n    );\n  });\n\n  test('Validate numeric-only TLD is invalid', () {\n    expect(\n      EmailValidator.validate('user@example.123'),\n      equals(false),\n      reason: 'Numeric-only TLD should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@123', true),\n      equals(false),\n      reason: 'Numeric-only single-label domain should be invalid',\n    );\n  });\n\n  test('Validate domain with leading dot or trailing dot is invalid', () {\n    expect(\n      EmailValidator.validate('user@.com'),\n      equals(false),\n      reason: 'Domain with leading dot should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@com.'),\n      equals(false),\n      reason: 'Domain with trailing dot should be invalid',\n    );\n  });\n\n  test('Validate multiple subdomains are valid', () {\n    expect(\n      EmailValidator.validate('user@sub.domain.example.com'),\n      equals(true),\n      reason: 'Multiple subdomains should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@a.b.c.d.e.com'),\n      equals(true),\n      reason: 'Many subdomain levels should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@example.co.uk'),\n      equals(true),\n      reason: 'Country code TLD with SLD should be 
valid',\n    );\n  });\n\n  test('Validate local-part with dots', () {\n    expect(\n      EmailValidator.validate('.user@example.com'),\n      equals(false),\n      reason: 'Local-part starting with dot should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user.@example.com'),\n      equals(false),\n      reason: 'Local-part ending with dot should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user..name@example.com'),\n      equals(false),\n      reason: 'Consecutive dots in local-part should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user.name@example.com'),\n      equals(true),\n      reason: 'Single dot in local-part should be valid',\n    );\n  });\n\n  test('Validate missing local-part or domain is invalid', () {\n    expect(\n      EmailValidator.validate('@example.com'),\n      equals(false),\n      reason: 'Missing local-part should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@'),\n      equals(false),\n      reason: 'Missing domain should be invalid',\n    );\n    expect(\n      EmailValidator.validate('@'),\n      equals(false),\n      reason: 'Only @ sign should be invalid',\n    );\n  });\n\n  test('Validate multiple @ signs is invalid', () {\n    expect(\n      EmailValidator.validate('user@@example.com'),\n      equals(false),\n      reason: 'Double @ should be invalid',\n    );\n  });\n\n  test('Validate spaces in email are invalid', () {\n    expect(\n      EmailValidator.validate('user name@example.com'),\n      equals(false),\n      reason: 'Space in local-part should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@exam ple.com'),\n      equals(false),\n      reason: 'Space in domain should be invalid',\n    );\n  });\n\n  test('Validate special characters in local-part', () {\n    expect(\n      EmailValidator.validate('user+tag@example.com'),\n      equals(true),\n      reason: 'Plus sign in local-part should be valid',\n    );\n    
expect(\n      EmailValidator.validate('user+tag+tag2@example.com'),\n      equals(true),\n      reason: 'Multiple plus signs in local-part should be valid',\n    );\n  });\n\n  test('Validate quoted strings edge cases', () {\n    expect(\n      EmailValidator.validate('\"test\"@example.com', true),\n      equals(true),\n      reason: 'Quoted local-part should be valid',\n    );\n    expect(\n      EmailValidator.validate('\"\"@example.com', true),\n      equals(true),\n      reason: 'Empty quoted local-part should be valid',\n    );\n    expect(\n      EmailValidator.validate('\"@\"@example.com', true),\n      equals(true),\n      reason: 'Quoted @ sign in local-part should be valid',\n    );\n    expect(\n      EmailValidator.validate('\"unclosed@example.com', true),\n      equals(false),\n      reason: 'Unclosed quote should be invalid',\n    );\n  });\n\n  test('Validate IPv4 literal edge cases', () {\n    expect(\n      EmailValidator.validate('user@[255.255.255.255]'),\n      equals(true),\n      reason: 'Max octets IPv4 should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@[256.0.0.0]'),\n      equals(false),\n      reason: 'IPv4 octet > 255 should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@[1.2.3]'),\n      equals(false),\n      reason: 'IPv4 with only 3 octets should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@[1.2.3.4.5]'),\n      equals(false),\n      reason: 'IPv4 with 5 octets should be invalid',\n    );\n    expect(\n      EmailValidator.validate('user@[1.2.3.]'),\n      equals(false),\n      reason: 'IPv4 with trailing dot should be invalid',\n    );\n  });\n\n  test('Validate IPv6 literal edge cases', () {\n    expect(\n      EmailValidator.validate('user@[IPv6:::1]'),\n      equals(true),\n      reason: 'IPv6 loopback should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@[IPv6:1::1]'),\n      equals(true),\n      reason: 'IPv6 compact form should be 
valid',\n    );\n    expect(\n      EmailValidator.validate('user@[IPv6:1:2:3:4:5:6:7:8]'),\n      equals(true),\n      reason: 'IPv6 full form should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@[IPv6:1:2:3:4:5:6:7:8:9]'),\n      equals(false),\n      reason: 'IPv6 with too many groups should be invalid',\n    );\n  });\n\n  test('Validate IPv6v4 literal edge cases', () {\n    expect(\n      EmailValidator.validate(\n        'user@[IPv6:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:127.0.0.1]',\n      ),\n      equals(true),\n      reason: 'Valid IPv6v4 address should be valid',\n    );\n    expect(\n      EmailValidator.validate(\n        'user@[IPv6:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:256.0.0.0]',\n      ),\n      equals(false),\n      reason: 'IPv6v4 with invalid IPv4 part should be invalid',\n    );\n  });\n\n  test('Validate unbracketed IP domain is invalid', () {\n    expect(\n      EmailValidator.validate('user@123.123.123.123'),\n      equals(false),\n      reason:\n          'Unbracketed IP should be treated as numeric domain and be invalid',\n    );\n  });\n\n  test('Validate underscore in domain is invalid', () {\n    expect(\n      EmailValidator.validate('user@exam_ple.com'),\n      equals(false),\n      reason: 'Underscore in domain should be invalid',\n    );\n  });\n\n  test('Validate single-character TLD is invalid', () {\n    expect(\n      EmailValidator.validate('a@b.c'),\n      equals(false),\n      reason: 'Single-character TLD should be invalid',\n    );\n  });\n\n  test('Validate minimal valid email addresses', () {\n    expect(\n      EmailValidator.validate('a@b.cc'),\n      equals(true),\n      reason: 'Minimal valid email should pass',\n    );\n    expect(\n      EmailValidator.validate('a@bb.cc'),\n      equals(true),\n      reason: 'Minimal email with 2-char SLD should pass',\n    );\n  });\n\n  test('Validate domain with numeric subdomain and alpha TLD', () {\n    expect(\n      EmailValidator.validate('user@123.com'),\n      
equals(true),\n      reason: 'Numeric subdomain with alphabetic TLD should be valid',\n    );\n    expect(\n      EmailValidator.validate('user@123abc.com'),\n      equals(true),\n      reason: 'Alphanumeric subdomain should be valid',\n    );\n  });\n}\n"
  },
  {
    "path": "tool/run_tests.sh",
    "content": "#!/bin/bash\n\nset -e\n\nDIR=$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )/..\" && pwd )\n\ncd \"$DIR\"\n\necho \"Installing dependencies...\"\ndart pub get\n\necho \"Checking formatting...\"\ndart format --output=none --set-exit-if-changed .\n\necho \"Analyzing for warnings and type errors...\"\ndart analyze --fatal-infos\n\necho \"Running tests...\"\ndart test\n\necho -e \"\\n\\033[32m✓ OK\\033[0m\"\n"
  }
]