Full Code of fredeil/email-validator.dart for AI

Repository: fredeil/email-validator.dart
Branch: master
Commit: 990caab3bbe2
Files: 20
Total size: 155.7 KB

Directory structure:
email-validator.dart/

├── .gitattributes
├── .github/
│   ├── FUNDING.yml
│   ├── copilot-instructions.md
│   └── workflows/
│       ├── ci.yml
│       ├── publish.yaml
│       ├── release.yml
│       ├── repo-assist.lock.yml
│       └── repo-assist.md
├── .gitignore
├── CHANGELOG.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── analysis_options.yaml
├── example/
│   ├── README.md
│   └── example.dart
├── lib/
│   └── email_validator.dart
├── pubspec.yaml
├── test/
│   └── email_validator_test.dart
└── tool/
    └── run_tests.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitattributes
================================================
.github/workflows/*.lock.yml linguist-generated=true merge=ours

================================================
FILE: .github/FUNDING.yml
================================================
# These are supported funding model platforms

github: fredeil
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


================================================
FILE: .github/copilot-instructions.md
================================================
# Copilot Instructions

## Commands

```bash
dart pub get          # install dependencies
dart test             # run all tests
dart test test/email_validator_test.dart  # run the single test file
dart analyze --fatal-infos  # lint/analyze
dart format --output=none --set-exit-if-changed .  # check formatting
dart format .         # auto-format
```

## Architecture

This is a minimal single-file Dart package (`lib/email_validator.dart`) that validates email addresses without using RegEx, implementing RFC-compliant parsing manually.

The `EmailValidator` class uses a **cursor-based parser** with a shared static `_index` field that advances through the email string. Parsing proceeds in two phases:
1. **Local part** – either a quoted string (`_skipQuoted`) or dot-separated atoms (`_skipAtom`)
2. **Domain part** – either a domain name (`_skipDomain`/`_skipSubDomain`) or an address literal (`_skipIPv4Literal`/`_skipIPv6Literal`)

`SubdomainType` enum tracks whether the current subdomain is alphabetic, numeric, or alphanumeric — used to reject all-numeric TLDs (e.g. `user@127.0.0.1` without brackets).

Public API is a single static method:
```dart
EmailValidator.validate(String email, [bool allowTopLevelDomains = false, bool allowInternational = true])
```
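For illustration, a minimal usage sketch of that API (a hypothetical example, not part of the repository; the import path is assumed from `lib/email_validator.dart` and a `bool` return type is assumed from the method's name):

```dart
import 'package:email_validator/email_validator.dart';

void main() {
  // Ordinary dot-separated atoms in the local part and a domain name.
  print(EmailValidator.validate('user@example.com'));

  // Per the notes above, an all-numeric TLD without brackets is rejected,
  // while the bracketed address-literal form goes through _skipIPv4Literal.
  print(EmailValidator.validate('user@127.0.0.1'));
  print(EmailValidator.validate('user@[127.0.0.1]'));

  // Optional positional flags: allowTopLevelDomains, allowInternational.
  print(EmailValidator.validate('user@com', true));
}
```

Because the parser mutates a shared static `_index`, concurrent calls from multiple isolates' shared state are not a concern, but the class is explicitly not thread-safe within one isolate, as the Key Conventions below note.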

## Key Conventions

- All private helpers are `static` and mutate the shared `static int _index` — the class is stateless between calls (reset at the start of `validate`), but **not thread-safe**.
- `allowInternational` controls whether non-ASCII characters (codeUnit ≥ 128) are accepted in local part and domain labels.
- Test cases in `test/email_validator_test.dart` are the canonical source of truth for expected behavior — valid/invalid/international address lists are maintained there directly.
- Releases are managed via the `Release` GitHub Actions workflow (manual `workflow_dispatch`), which bumps `pubspec.yaml`, updates `CHANGELOG.md`, commits, and tags. Publishing to pub.dev triggers automatically on version tags matching `v*.*.*`.
- When updating the version, update both `pubspec.yaml` and `CHANGELOG.md`.


================================================
FILE: .github/workflows/ci.yml
================================================
name: CI

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: dart-lang/setup-dart@v1
      - run: dart pub get
      - run: dart format --output=none --set-exit-if-changed .
      - run: dart analyze --fatal-infos
      - run: dart test


================================================
FILE: .github/workflows/publish.yaml
================================================
name: Publish to pub.dev

on:
  push:
    tags:
      - 'v[0-9]+.[0-9]+.[0-9]+*'

jobs:
  publish:
    permissions:
      id-token: write
    uses: dart-lang/setup-dart/.github/workflows/publish.yml@v1


================================================
FILE: .github/workflows/release.yml
================================================
name: Release

on:
  workflow_dispatch:
    inputs:
      version:
        description: "Version: package version and git tag to create (e.g. 3.1.0)"
        required: true

      changelog:
        description: "CHANGELOG.md notes"
        required: true
        default: "Bug fixes and performance improvements"

      pre_release:
        description: "Whether the release is a prerelease"
        required: true
        default: "false"
        type: choice
        options:
          - "false"
          - "true"

permissions:
  contents: write

jobs:
  release:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: dart-lang/setup-dart@v1
        with:
          sdk: stable

      - name: Install dependencies
        run: dart pub get

      - name: Verify formatting
        run: dart format --output=none --set-exit-if-changed .

      - name: Analyze
        run: dart analyze --fatal-infos

      - name: Run tests
        run: dart test

      - name: Determine version tag
        id: version
        run: |
          TAG=${{ inputs.version }}
          if [ "${{ inputs.pre_release }}" = "true" ]; then
            TAG="${TAG}-dev"
          fi
          echo "tag=${TAG}" >> "$GITHUB_OUTPUT"

      - name: Update pubspec.yaml version
        run: |
          sed -i "s/^version: .*/version: ${{ steps.version.outputs.tag }}/" pubspec.yaml

      - name: Update CHANGELOG.md
        run: |
          printf '%s\n%s\n\n%s' "## ${{ steps.version.outputs.tag }}" "${{ inputs.changelog }}" "$(cat CHANGELOG.md)" > CHANGELOG.md

      - name: Commit and push version bump
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add pubspec.yaml CHANGELOG.md
          git commit -m "chore: bump version to ${{ steps.version.outputs.tag }}"
          git push origin HEAD:${{ github.ref_name }}

      - name: Create GitHub release
        uses: softprops/action-gh-release@v2
        with:
          tag_name: v${{ steps.version.outputs.tag }}
          body: ${{ inputs.changelog }}
          prerelease: ${{ inputs.pre_release == 'true' }}
          target_commitish: ${{ github.ref_name }}


================================================
FILE: .github/workflows/repo-assist.lock.yml
================================================
#
#    ___                   _   _      
#   / _ \                 | | (_)     
#  | |_| | __ _  ___ _ __ | |_ _  ___ 
#  |  _  |/ _` |/ _ \ '_ \| __| |/ __|
#  | | | | (_| |  __/ | | | |_| | (__ 
#  \_| |_/\__, |\___|_| |_|\__|_|\___|
#          __/ |
#  _    _ |___/ 
# | |  | |                / _| |
# | |  | | ___ _ __ _  __| |_| | _____      ____
# | |/\| |/ _ \ '__| |/ /|  _| |/ _ \ \ /\ / / ___|
# \  /\  / (_) | | | | ( | | | | (_) \ V  V /\__ \
#  \/  \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by gh-aw (v0.50.2). DO NOT EDIT.
#
# To update this file, edit githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd and run:
#   gh aw compile
# Not all edits will cause changes to this file.
#
# For more information: https://github.github.com/gh-aw/introduction/overview/
#
# A friendly repository assistant that runs daily to support contributors and maintainers.
# Can also be triggered on-demand via '/repo-assist <instructions>' to perform specific tasks.
# - Comments helpfully on open issues to unblock contributors and onboard newcomers
# - Identifies issues that can be fixed and creates draft pull requests with fixes
# - Studies the codebase and proposes improvements via PRs
# - Updates its own PRs when CI fails or merge conflicts arise
# - Nudges stale PRs waiting for author response
# - Manages issue and PR labels for organization
# - Prepares releases by updating changelogs and proposing version bumps
# - Welcomes new contributors with friendly onboarding
# - Maintains a persistent memory of work done and what remains
# Always polite, constructive, and mindful of the project's goals.
#
# Source: githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd
#
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"c5b7fa09add2feca4fc9f81e36b59e7bef152488b3f2d34bda0bec18b300e050","compiler_version":"v0.50.2"}

name: "Repo Assist"
"on":
  discussion:
    types:
    - created
    - edited
  discussion_comment:
    types:
    - created
    - edited
  issue_comment:
    types:
    - created
    - edited
  issues:
    types:
    - opened
    - edited
    - reopened
  pull_request:
    types:
    - opened
    - edited
    - reopened
  pull_request_review_comment:
    types:
    - created
    - edited
  schedule:
  - cron: "20 7 * * *"
  workflow_dispatch: null

permissions: {}

concurrency:
  group: "gh-aw-${{ github.workflow }}-${{ github.event.issue.number || github.event.pull_request.number }}"

run-name: "Repo Assist"

jobs:
  activation:
    needs: pre_activation
    if: >
      (needs.pre_activation.outputs.activated == 'true') && (((github.event_name == 'issues' || github.event_name == 'issue_comment' ||
      github.event_name == 'pull_request' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' ||
      github.event_name == 'discussion_comment') && ((github.event_name == 'issues') && ((startsWith(github.event.issue.body, '/repo-assist ')) ||
      (github.event.issue.body == '/repo-assist')) || (github.event_name == 'issue_comment') && (((startsWith(github.event.comment.body, '/repo-assist ')) ||
      (github.event.comment.body == '/repo-assist')) && (github.event.issue.pull_request == null)) || (github.event_name == 'issue_comment') &&
      (((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) &&
      (github.event.issue.pull_request != null)) || (github.event_name == 'pull_request_review_comment') &&
      ((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) ||
      (github.event_name == 'pull_request') && ((startsWith(github.event.pull_request.body, '/repo-assist ')) ||
      (github.event.pull_request.body == '/repo-assist')) || (github.event_name == 'discussion') && ((startsWith(github.event.discussion.body, '/repo-assist ')) ||
      (github.event.discussion.body == '/repo-assist')) || (github.event_name == 'discussion_comment') && ((startsWith(github.event.comment.body, '/repo-assist ')) ||
      (github.event.comment.body == '/repo-assist')))) || (!(github.event_name == 'issues' || github.event_name == 'issue_comment' ||
      github.event_name == 'pull_request' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' ||
      github.event_name == 'discussion_comment')))
    runs-on: ubuntu-slim
    permissions:
      contents: read
      discussions: write
      issues: write
      pull-requests: write
    outputs:
      body: ${{ steps.sanitized.outputs.body }}
      comment_id: ""
      comment_repo: ""
      slash_command: ${{ needs.pre_activation.outputs.matched_command }}
      text: ${{ steps.sanitized.outputs.text }}
      title: ${{ steps.sanitized.outputs.title }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2
        with:
          destination: /opt/gh-aw/actions
      - name: Validate context variables
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/validate_context_variables.cjs');
            await main();
      - name: Checkout .github and .agents folders
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          sparse-checkout: |
            .github
            .agents
          fetch-depth: 1
          persist-credentials: false
      - name: Check workflow file timestamps
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_WORKFLOW_FILE: "repo-assist.lock.yml"
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/check_workflow_timestamp_api.cjs');
            await main();
      - name: Compute current body text
        id: sanitized
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/compute_text.cjs');
            await main();
      - name: Create prompt with built-in context
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
          GH_AW_GITHUB_ACTOR: ${{ github.actor }}
          GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }}
          GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }}
          GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }}
          GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}
          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
          GH_AW_GITHUB_SERVER_URL: ${{ github.server_url }}
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
          GH_AW_IS_PR_COMMENT: ${{ github.event.issue.pull_request && 'true' || '' }}
          GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: ${{ steps.sanitized.outputs.text }}
        run: |
          bash /opt/gh-aw/actions/create_prompt_first.sh
          {
          cat << 'GH_AW_PROMPT_EOF'
          <system>
          GH_AW_PROMPT_EOF
          cat "/opt/gh-aw/prompts/xpia.md"
          cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
          cat "/opt/gh-aw/prompts/markdown.md"
          cat "/opt/gh-aw/prompts/repo_memory_prompt.md"
          cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
          cat << 'GH_AW_PROMPT_EOF'
          <safe-output-tools>
          Tools: add_comment, create_issue, update_issue, create_pull_request, add_labels, remove_labels, push_to_pull_request_branch, missing_tool, missing_data
          GH_AW_PROMPT_EOF
          cat "/opt/gh-aw/prompts/safe_outputs_create_pull_request.md"
          cat "/opt/gh-aw/prompts/safe_outputs_push_to_pr_branch.md"
          cat << 'GH_AW_PROMPT_EOF'
          </safe-output-tools>
          <github-context>
          The following GitHub context information is available for this workflow:
          {{#if __GH_AW_GITHUB_ACTOR__ }}
          - **actor**: __GH_AW_GITHUB_ACTOR__
          {{/if}}
          {{#if __GH_AW_GITHUB_REPOSITORY__ }}
          - **repository**: __GH_AW_GITHUB_REPOSITORY__
          {{/if}}
          {{#if __GH_AW_GITHUB_WORKSPACE__ }}
          - **workspace**: __GH_AW_GITHUB_WORKSPACE__
          {{/if}}
          {{#if __GH_AW_GITHUB_EVENT_ISSUE_NUMBER__ }}
          - **issue-number**: #__GH_AW_GITHUB_EVENT_ISSUE_NUMBER__
          {{/if}}
          {{#if __GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__ }}
          - **discussion-number**: #__GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER__
          {{/if}}
          {{#if __GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__ }}
          - **pull-request-number**: #__GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER__
          {{/if}}
          {{#if __GH_AW_GITHUB_EVENT_COMMENT_ID__ }}
          - **comment-id**: __GH_AW_GITHUB_EVENT_COMMENT_ID__
          {{/if}}
          {{#if __GH_AW_GITHUB_RUN_ID__ }}
          - **workflow-run-id**: __GH_AW_GITHUB_RUN_ID__
          {{/if}}
          </github-context>
          
          GH_AW_PROMPT_EOF
          if [ "$GITHUB_EVENT_NAME" = "issue_comment" ] && [ -n "$GH_AW_IS_PR_COMMENT" ] || [ "$GITHUB_EVENT_NAME" = "pull_request_review_comment" ] || [ "$GITHUB_EVENT_NAME" = "pull_request_review" ]; then
            cat "/opt/gh-aw/prompts/pr_context_prompt.md"
          fi
          cat << 'GH_AW_PROMPT_EOF'
          </system>
          GH_AW_PROMPT_EOF
          cat << 'GH_AW_PROMPT_EOF'
          {{#runtime-import .github/workflows/repo-assist.md}}
          GH_AW_PROMPT_EOF
          } > "$GH_AW_PROMPT"
      - name: Interpolate variables and render templates
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
          GH_AW_GITHUB_SERVER_URL: ${{ github.server_url }}
          GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: ${{ steps.sanitized.outputs.text }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/interpolate_prompt.cjs');
            await main();
      - name: Substitute placeholders
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_GITHUB_ACTOR: ${{ github.actor }}
          GH_AW_GITHUB_EVENT_COMMENT_ID: ${{ github.event.comment.id }}
          GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: ${{ github.event.discussion.number }}
          GH_AW_GITHUB_EVENT_ISSUE_NUMBER: ${{ github.event.issue.number }}
          GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}
          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
          GH_AW_GITHUB_SERVER_URL: ${{ github.server_url }}
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
          GH_AW_IS_PR_COMMENT: ${{ github.event.issue.pull_request && 'true' || '' }}
          GH_AW_MEMORY_BRANCH_NAME: 'memory/repo-assist'
          GH_AW_MEMORY_CONSTRAINTS: "\n\n**Constraints:**\n- **Max File Size**: 10240 bytes (0.01 MB) per file\n- **Max File Count**: 100 files per commit\n- **Max Patch Size**: 10240 bytes (10 KB) total per push (max: 100 KB)\n"
          GH_AW_MEMORY_DESCRIPTION: ''
          GH_AW_MEMORY_DIR: '/tmp/gh-aw/repo-memory/default/'
          GH_AW_MEMORY_TARGET_REPO: ' of the current repository'
          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
          GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: ${{ steps.sanitized.outputs.text }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            
            const substitutePlaceholders = require('/opt/gh-aw/actions/substitute_placeholders.cjs');
            
            // Call the substitution function
            return await substitutePlaceholders({
              file: process.env.GH_AW_PROMPT,
              substitutions: {
                GH_AW_GITHUB_ACTOR: process.env.GH_AW_GITHUB_ACTOR,
                GH_AW_GITHUB_EVENT_COMMENT_ID: process.env.GH_AW_GITHUB_EVENT_COMMENT_ID,
                GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER: process.env.GH_AW_GITHUB_EVENT_DISCUSSION_NUMBER,
                GH_AW_GITHUB_EVENT_ISSUE_NUMBER: process.env.GH_AW_GITHUB_EVENT_ISSUE_NUMBER,
                GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
                GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
                GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
                GH_AW_GITHUB_SERVER_URL: process.env.GH_AW_GITHUB_SERVER_URL,
                GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
                GH_AW_IS_PR_COMMENT: process.env.GH_AW_IS_PR_COMMENT,
                GH_AW_MEMORY_BRANCH_NAME: process.env.GH_AW_MEMORY_BRANCH_NAME,
                GH_AW_MEMORY_CONSTRAINTS: process.env.GH_AW_MEMORY_CONSTRAINTS,
                GH_AW_MEMORY_DESCRIPTION: process.env.GH_AW_MEMORY_DESCRIPTION,
                GH_AW_MEMORY_DIR: process.env.GH_AW_MEMORY_DIR,
                GH_AW_MEMORY_TARGET_REPO: process.env.GH_AW_MEMORY_TARGET_REPO,
                GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
                GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND,
                GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT: process.env.GH_AW_STEPS_SANITIZED_OUTPUTS_TEXT
              }
            });
      - name: Validate prompt placeholders
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        run: bash /opt/gh-aw/actions/validate_prompt_placeholders.sh
      - name: Print prompt
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        run: bash /opt/gh-aw/actions/print_prompt_summary.sh
      - name: Upload prompt artifact
        if: success()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: prompt
          path: /tmp/gh-aw/aw-prompts/prompt.txt
          retention-days: 1

  agent:
    needs: activation
    runs-on: ubuntu-latest
    permissions: read-all
    env:
      DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
      GH_AW_ASSETS_ALLOWED_EXTS: ""
      GH_AW_ASSETS_BRANCH: ""
      GH_AW_ASSETS_MAX_SIZE_KB: 0
      GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs
      GH_AW_SAFE_OUTPUTS: /opt/gh-aw/safeoutputs/outputs.jsonl
      GH_AW_SAFE_OUTPUTS_CONFIG_PATH: /opt/gh-aw/safeoutputs/config.json
      GH_AW_SAFE_OUTPUTS_TOOLS_PATH: /opt/gh-aw/safeoutputs/tools.json
      GH_AW_WORKFLOW_ID_SANITIZED: repoassist
    outputs:
      checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
      detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
      detection_success: ${{ steps.detection_conclusion.outputs.success }}
      has_patch: ${{ steps.collect_output.outputs.has_patch }}
      model: ${{ steps.generate_aw_info.outputs.model }}
      output: ${{ steps.collect_output.outputs.output }}
      output_types: ${{ steps.collect_output.outputs.output_types }}
      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2
        with:
          destination: /opt/gh-aw/actions
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
      - name: Create gh-aw temp directory
        run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
      # Repo memory git-based storage configuration from frontmatter processed below
      - name: Clone repo-memory branch (default)
        env:
          GH_TOKEN: ${{ github.token }}
          GITHUB_SERVER_URL: ${{ github.server_url }}
          BRANCH_NAME: memory/repo-assist
          TARGET_REPO: ${{ github.repository }}
          MEMORY_DIR: /tmp/gh-aw/repo-memory/default
          CREATE_ORPHAN: true
        run: bash /opt/gh-aw/actions/clone_repo_memory_branch.sh
      - name: Configure Git credentials
        env:
          REPO_NAME: ${{ github.repository }}
          SERVER_URL: ${{ github.server_url }}
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
          git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
          echo "Git configured with standard GitHub Actions identity"
      - name: Checkout PR branch
        id: checkout-pr
        if: |
          github.event.pull_request
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
            await main();
      - name: Generate agentic run info
        id: generate_aw_info
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const fs = require('fs');
            
            const awInfo = {
              engine_id: "copilot",
              engine_name: "GitHub Copilot CLI",
              model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
              version: "",
              agent_version: "0.0.415",
              cli_version: "v0.50.2",
              workflow_name: "Repo Assist",
              experimental: false,
              supports_tools_allowlist: true,
              run_id: context.runId,
              run_number: context.runNumber,
              run_attempt: process.env.GITHUB_RUN_ATTEMPT,
              repository: context.repo.owner + '/' + context.repo.repo,
              ref: context.ref,
              sha: context.sha,
              actor: context.actor,
              event_name: context.eventName,
              staged: false,
              allowed_domains: ["defaults","dotnet","node","python","rust","java"],
              firewall_enabled: true,
              awf_version: "v0.23.0",
              awmg_version: "v0.1.5",
              steps: {
                firewall: "squid"
              },
              created_at: new Date().toISOString()
            };
            
            // Write to /tmp/gh-aw directory to avoid inclusion in PR
            const tmpPath = '/tmp/gh-aw/aw_info.json';
            fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
            console.log('Generated aw_info.json at:', tmpPath);
            console.log(JSON.stringify(awInfo, null, 2));
            
            // Set model as output for reuse in other steps/jobs
            core.setOutput('model', awInfo.model);
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Install GitHub Copilot CLI
        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.415
      - name: Install awf binary
        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
      - name: Determine automatic lockdown mode for GitHub MCP Server
        id: determine-automatic-lockdown
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
          GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
        with:
          script: |
            const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
            await determineAutomaticLockdown(github, context, core);
      - name: Download container images
        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.5 ghcr.io/github/github-mcp-server:v0.31.0 node:lts-alpine
      - name: Write Safe Outputs Config
        run: |
          mkdir -p /opt/gh-aw/safeoutputs
          mkdir -p /tmp/gh-aw/safeoutputs
          mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs
          cat > /opt/gh-aw/safeoutputs/config.json << 'GH_AW_SAFE_OUTPUTS_CONFIG_EOF'
          {"add_comment":{"max":10,"target":"*"},"add_labels":{"allowed":["bug","enhancement","help wanted","good first issue","spam","off topic","documentation","question","duplicate","wontfix","needs triage","needs investigation","breaking change","performance","security","refactor"],"max":30,"target":"*"},"create_issue":{"max":4},"create_pull_request":{"max":4},"missing_data":{},"missing_tool":{},"noop":{"max":1},"push_to_pull_request_branch":{"max":4,"target":"*"},"remove_labels":{"allowed":["bug","enhancement","help wanted","good first issue","spam","off topic","documentation","question","duplicate","wontfix","needs triage","needs investigation","breaking change","performance","security","refactor"],"max":5},"update_issue":{"max":1}}
          GH_AW_SAFE_OUTPUTS_CONFIG_EOF
          cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
          [
            {
              "description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 4 issue(s) can be created. Title will be prefixed with \"[Repo Assist] \". Labels [automation repo-assist] will be automatically added.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "body": {
                    "description": "Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.",
                    "type": "string"
                  },
                  "labels": {
                    "description": "Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.",
                    "items": {
                      "type": "string"
                    },
                    "type": "array"
                  },
                  "parent": {
                    "description": "Parent issue number for creating sub-issues. This is the numeric ID from the GitHub URL (e.g., 42 in github.com/owner/repo/issues/42). Can also be a temporary_id (e.g., 'aw_abc123', 'aw_Test123') from a previously created issue in the same workflow run.",
                    "type": [
                      "number",
                      "string"
                    ]
                  },
                  "temporary_id": {
                    "description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
                    "pattern": "^aw_[A-Za-z0-9]{3,8}$",
                    "type": "string"
                  },
                  "title": {
                    "description": "Concise issue title summarizing the bug, feature, or task. The title appears as the main heading, so keep it brief and descriptive.",
                    "type": "string"
                  }
                },
                "required": [
                  "title",
                  "body"
                ],
                "type": "object"
              },
              "name": "create_issue"
            },
            {
              "description": "Add a comment to an existing GitHub issue, pull request, or discussion. Use this to provide feedback, answer questions, or add information to an existing conversation. For creating new items, use create_issue, create_discussion, or create_pull_request instead. IMPORTANT: Comments are subject to validation constraints enforced by the MCP server - maximum 65536 characters for the complete comment (including footer which is added automatically), 10 mentions (@username), and 50 links. Exceeding these limits will result in an immediate error with specific guidance. NOTE: By default, this tool requires discussions:write permission. If your GitHub App lacks Discussions permission, set 'discussions: false' in the workflow's safe-outputs.add-comment configuration to exclude this permission. CONSTRAINTS: Maximum 10 comment(s) can be added. Target: *.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "body": {
                    "description": "The comment text in Markdown format. This is the 'body' field - do not use 'comment_body' or other variations. Provide helpful, relevant information that adds value to the conversation. CONSTRAINTS: The complete comment (your body text + automatically added footer) must not exceed 65536 characters total. Maximum 10 mentions (@username), maximum 50 links (http/https URLs). A footer (~200-500 characters) is automatically appended with workflow attribution, so leave adequate space. If these limits are exceeded, the tool call will fail with a detailed error message indicating which constraint was violated.",
                    "type": "string"
                  },
                  "item_number": {
                    "description": "The issue, pull request, or discussion number to comment on. This is the numeric ID from the GitHub URL (e.g., 123 in github.com/owner/repo/issues/123). If omitted, the tool auto-targets the issue, PR, or discussion that triggered this workflow. Auto-targeting only works for issue, pull_request, discussion, and comment event triggers — it does NOT work for schedule, workflow_dispatch, push, or workflow_run triggers. For those trigger types, always provide item_number explicitly, or the comment will be silently discarded.",
                    "type": "number"
                  }
                },
                "required": [
                  "body"
                ],
                "type": "object"
              },
              "name": "add_comment"
            },
            {
              "description": "Create a new GitHub pull request to propose code changes. Use this after making file edits to submit them for review and merging. The PR will be created from the current branch with your committed changes. For code review comments on an existing PR, use create_pull_request_review_comment instead. CONSTRAINTS: Maximum 4 pull request(s) can be created. Title will be prefixed with \"[Repo Assist] \". Labels [automation repo-assist] will be automatically added. PRs will be created as drafts.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "body": {
                    "description": "Detailed PR description in Markdown. Include what changes were made, why, testing notes, and any breaking changes. Do NOT repeat the title as a heading.",
                    "type": "string"
                  },
                  "branch": {
                    "description": "Source branch name containing the changes. If omitted, uses the current working branch.",
                    "type": "string"
                  },
                  "draft": {
                    "description": "Whether to create the PR as a draft. Draft PRs cannot be merged until marked as ready for review. Use mark_pull_request_as_ready_for_review to convert a draft PR. Default: true.",
                    "type": "boolean"
                  },
                  "labels": {
                    "description": "Labels to categorize the PR (e.g., 'enhancement', 'bugfix'). Labels must exist in the repository.",
                    "items": {
                      "type": "string"
                    },
                    "type": "array"
                  },
                  "title": {
                    "description": "Concise PR title describing the changes. Follow repository conventions (e.g., conventional commits). The title appears as the main heading.",
                    "type": "string"
                  }
                },
                "required": [
                  "title",
                  "body"
                ],
                "type": "object"
              },
              "name": "create_pull_request"
            },
            {
              "description": "Add labels to an existing GitHub issue or pull request for categorization and filtering. Labels must already exist in the repository. For creating new issues with labels, use create_issue with the labels property instead. CONSTRAINTS: Maximum 30 label(s) can be added. Only these labels are allowed: [bug enhancement help wanted good first issue spam off topic documentation question duplicate wontfix needs triage needs investigation breaking change performance security refactor]. Target: *.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "item_number": {
                    "description": "Issue or PR number to add labels to. This is the numeric ID from the GitHub URL (e.g., 456 in github.com/owner/repo/issues/456). If omitted, adds labels to the issue or PR that triggered this workflow. Only works for issue or pull_request event triggers. For schedule, workflow_dispatch, or other triggers, item_number is required — omitting it will silently skip the label operation.",
                    "type": "number"
                  },
                  "labels": {
                    "description": "Label names to add (e.g., ['bug', 'priority-high']). Labels must exist in the repository.",
                    "items": {
                      "type": "string"
                    },
                    "type": "array"
                  }
                },
                "type": "object"
              },
              "name": "add_labels"
            },
            {
              "description": "Remove labels from an existing GitHub issue or pull request. Silently skips labels that don't exist on the item. Use this to clean up labels or manage label lifecycles (e.g., removing 'needs-review' after review is complete). CONSTRAINTS: Maximum 5 label(s) can be removed. Only these labels can be removed: [bug enhancement help wanted good first issue spam off topic documentation question duplicate wontfix needs triage needs investigation breaking change performance security refactor]. Target: *.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "item_number": {
                    "description": "Issue or PR number to remove labels from. This is the numeric ID from the GitHub URL (e.g., 456 in github.com/owner/repo/issues/456). If omitted, removes labels from the item that triggered this workflow.",
                    "type": "number"
                  },
                  "labels": {
                    "description": "Label names to remove (e.g., ['smoke', 'needs-triage']). Non-existent labels are silently skipped.",
                    "items": {
                      "type": "string"
                    },
                    "type": "array"
                  }
                },
                "required": [
                  "labels"
                ],
                "type": "object"
              },
              "name": "remove_labels"
            },
            {
              "description": "Update an existing GitHub issue's title, body, labels, assignees, or milestone WITHOUT closing it. This tool is primarily for editing issue metadata and content. While it supports changing status between 'open' and 'closed', use close_issue instead when you want to close an issue with a closing comment. Body updates support replacing, appending to, prepending content, or updating a per-run \"island\" section. CONSTRAINTS: Maximum 1 issue(s) can be updated. Target: *.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "assignees": {
                    "description": "Replace the issue assignees with this list of GitHub usernames (e.g., ['octocat', 'mona']).",
                    "items": {
                      "type": "string"
                    },
                    "type": "array"
                  },
                  "body": {
                    "description": "Issue body content in Markdown. For 'replace', this becomes the entire body. For 'append'/'prepend', this content is added with a separator and an attribution footer. For 'replace-island', only the run-specific section is updated.",
                    "type": "string"
                  },
                  "issue_number": {
                    "description": "Issue number to update. This is the numeric ID from the GitHub URL (e.g., 789 in github.com/owner/repo/issues/789). Required when the workflow target is '*' (any issue).",
                    "type": [
                      "number",
                      "string"
                    ]
                  },
                  "labels": {
                    "description": "Replace the issue labels with this list (e.g., ['bug', 'tracking:foo']). Labels must exist in the repository.",
                    "items": {
                      "type": "string"
                    },
                    "type": "array"
                  },
                  "milestone": {
                    "description": "Milestone number to assign (e.g., 1). Use null to clear.",
                    "type": [
                      "number",
                      "string"
                    ]
                  },
                  "operation": {
                    "description": "How to update the issue body: 'append' (default - add to end with separator), 'prepend' (add to start with separator), 'replace' (overwrite entire body), or 'replace-island' (update a run-specific section).",
                    "enum": [
                      "replace",
                      "append",
                      "prepend",
                      "replace-island"
                    ],
                    "type": "string"
                  },
                  "status": {
                    "description": "New issue status: 'open' to reopen a closed issue, 'closed' to close an open issue.",
                    "enum": [
                      "open",
                      "closed"
                    ],
                    "type": "string"
                  },
                  "title": {
                    "description": "New issue title to replace the existing title.",
                    "type": "string"
                  }
                },
                "type": "object"
              },
              "name": "update_issue"
            },
            {
              "description": "Push committed changes to a pull request's branch. Use this to add follow-up commits to an existing PR, such as addressing review feedback or fixing issues. Changes must be committed locally before calling this tool. CONSTRAINTS: Maximum 4 push(es) can be made.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "branch": {
                    "description": "Branch name to push changes from. If omitted, uses the current working branch. Only specify if you need to push from a different branch.",
                    "type": "string"
                  },
                  "message": {
                    "description": "Commit message describing the changes. Follow repository commit message conventions (e.g., conventional commits).",
                    "type": "string"
                  },
                  "pull_request_number": {
                    "description": "Pull request number to push changes to. This is the numeric ID from the GitHub URL (e.g., 654 in github.com/owner/repo/pull/654). Required when the workflow target is '*' (any PR).",
                    "type": [
                      "number",
                      "string"
                    ]
                  }
                },
                "required": [
                  "message"
                ],
                "type": "object"
              },
              "name": "push_to_pull_request_branch"
            },
            {
              "description": "Report that a tool or capability needed to complete the task is not available, or share any information you deem important about missing functionality or limitations. Use this when you cannot accomplish what was requested because the required functionality is missing or access is restricted.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "alternatives": {
                    "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
                    "type": "string"
                  },
                  "reason": {
                    "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
                    "type": "string"
                  },
                  "tool": {
                    "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
                    "type": "string"
                  }
                },
                "required": [
                  "reason"
                ],
                "type": "object"
              },
              "name": "missing_tool"
            },
            {
              "description": "Log a transparency message when no significant actions are needed. Use this to confirm workflow completion and provide visibility when analysis is complete but no changes or outputs are required (e.g., 'No issues found', 'All checks passed'). This ensures the workflow produces human-visible output even when no other actions are taken.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "message": {
                    "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
                    "type": "string"
                  }
                },
                "required": [
                  "message"
                ],
                "type": "object"
              },
              "name": "noop"
            },
            {
              "description": "Report that data or information needed to complete the task is not available. Use this when you cannot accomplish what was requested because required data, context, or information is missing.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "alternatives": {
                    "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
                    "type": "string"
                  },
                  "context": {
                    "description": "Additional context about the missing data or where it should come from (max 256 characters).",
                    "type": "string"
                  },
                  "data_type": {
                    "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
                    "type": "string"
                  },
                  "reason": {
                    "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
                    "type": "string"
                  }
                },
                "required": [],
                "type": "object"
              },
              "name": "missing_data"
            }
          ]
          GH_AW_SAFE_OUTPUTS_TOOLS_EOF
          cat > /opt/gh-aw/safeoutputs/validation.json << 'GH_AW_SAFE_OUTPUTS_VALIDATION_EOF'
          {
            "add_comment": {
              "defaultMax": 1,
              "fields": {
                "body": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 65000
                },
                "item_number": {
                  "issueOrPRNumber": true
                },
                "repo": {
                  "type": "string",
                  "maxLength": 256
                }
              }
            },
            "add_labels": {
              "defaultMax": 5,
              "fields": {
                "item_number": {
                  "issueOrPRNumber": true
                },
                "labels": {
                  "required": true,
                  "type": "array",
                  "itemType": "string",
                  "itemSanitize": true,
                  "itemMaxLength": 128
                },
                "repo": {
                  "type": "string",
                  "maxLength": 256
                }
              }
            },
            "create_issue": {
              "defaultMax": 1,
              "fields": {
                "body": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 65000
                },
                "labels": {
                  "type": "array",
                  "itemType": "string",
                  "itemSanitize": true,
                  "itemMaxLength": 128
                },
                "parent": {
                  "issueOrPRNumber": true
                },
                "repo": {
                  "type": "string",
                  "maxLength": 256
                },
                "temporary_id": {
                  "type": "string"
                },
                "title": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 128
                }
              }
            },
            "create_pull_request": {
              "defaultMax": 1,
              "fields": {
                "body": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 65000
                },
                "branch": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 256
                },
                "draft": {
                  "type": "boolean"
                },
                "labels": {
                  "type": "array",
                  "itemType": "string",
                  "itemSanitize": true,
                  "itemMaxLength": 128
                },
                "repo": {
                  "type": "string",
                  "maxLength": 256
                },
                "title": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 128
                }
              }
            },
            "missing_data": {
              "defaultMax": 20,
              "fields": {
                "alternatives": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 256
                },
                "context": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 256
                },
                "data_type": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 128
                },
                "reason": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 256
                }
              }
            },
            "missing_tool": {
              "defaultMax": 20,
              "fields": {
                "alternatives": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 512
                },
                "reason": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 256
                },
                "tool": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 128
                }
              }
            },
            "noop": {
              "defaultMax": 1,
              "fields": {
                "message": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 65000
                }
              }
            },
            "push_to_pull_request_branch": {
              "defaultMax": 1,
              "fields": {
                "branch": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 256
                },
                "message": {
                  "required": true,
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 65000
                },
                "pull_request_number": {
                  "issueOrPRNumber": true
                }
              }
            },
            "remove_labels": {
              "defaultMax": 5,
              "fields": {
                "item_number": {
                  "issueOrPRNumber": true
                },
                "labels": {
                  "required": true,
                  "type": "array",
                  "itemType": "string",
                  "itemSanitize": true,
                  "itemMaxLength": 128
                },
                "repo": {
                  "type": "string",
                  "maxLength": 256
                }
              }
            },
            "update_issue": {
              "defaultMax": 1,
              "fields": {
                "assignees": {
                  "type": "array",
                  "itemType": "string",
                  "itemSanitize": true,
                  "itemMaxLength": 39
                },
                "body": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 65000
                },
                "issue_number": {
                  "issueOrPRNumber": true
                },
                "labels": {
                  "type": "array",
                  "itemType": "string",
                  "itemSanitize": true,
                  "itemMaxLength": 128
                },
                "milestone": {
                  "optionalPositiveInteger": true
                },
                "operation": {
                  "type": "string",
                  "enum": [
                    "replace",
                    "append",
                    "prepend",
                    "replace-island"
                  ]
                },
                "repo": {
                  "type": "string",
                  "maxLength": 256
                },
                "status": {
                  "type": "string",
                  "enum": [
                    "open",
                    "closed"
                  ]
                },
                "title": {
                  "type": "string",
                  "sanitize": true,
                  "maxLength": 128
                }
              },
              "customValidation": "requiresOneOf:status,title,body"
            }
          }
          GH_AW_SAFE_OUTPUTS_VALIDATION_EOF
      - name: Generate Safe Outputs MCP Server Config
        id: safe-outputs-config
        run: |
          # Generate a secure random API key (360 bits of entropy, 40+ chars)
          # Mask immediately so the key is never exposed in workflow logs
          API_KEY=$(openssl rand -base64 45 | tr -d '/+=')
          echo "::add-mask::${API_KEY}"
          
          PORT=3001
          
          # Set outputs for next steps
          {
            echo "safe_outputs_api_key=${API_KEY}"
            echo "safe_outputs_port=${PORT}"
          } >> "$GITHUB_OUTPUT"
          
          echo "Safe Outputs MCP server will run on port ${PORT}"
          
      - name: Start Safe Outputs MCP HTTP Server
        id: safe-outputs-start
        env:
          DEBUG: '*'
          GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-config.outputs.safe_outputs_port }}
          GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-config.outputs.safe_outputs_api_key }}
          GH_AW_SAFE_OUTPUTS_TOOLS_PATH: /opt/gh-aw/safeoutputs/tools.json
          GH_AW_SAFE_OUTPUTS_CONFIG_PATH: /opt/gh-aw/safeoutputs/config.json
          GH_AW_MCP_LOG_DIR: /tmp/gh-aw/mcp-logs/safeoutputs
        run: |
          # Environment variables are set above to prevent template injection
          export DEBUG
          export GH_AW_SAFE_OUTPUTS_PORT
          export GH_AW_SAFE_OUTPUTS_API_KEY
          export GH_AW_SAFE_OUTPUTS_TOOLS_PATH
          export GH_AW_SAFE_OUTPUTS_CONFIG_PATH
          export GH_AW_MCP_LOG_DIR
          
          bash /opt/gh-aw/actions/start_safe_outputs_server.sh
          
      - name: Start MCP Gateway
        id: start-mcp-gateway
        env:
          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
          GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-start.outputs.api_key }}
          GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-start.outputs.port }}
          GITHUB_MCP_LOCKDOWN: ${{ steps.determine-automatic-lockdown.outputs.lockdown == 'true' && '1' || '0' }}
          GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
        run: |
          set -eo pipefail
          mkdir -p /tmp/gh-aw/mcp-config
          
          # Export gateway environment variables for MCP config and gateway script
          export MCP_GATEWAY_PORT="80"
          export MCP_GATEWAY_DOMAIN="host.docker.internal"
          MCP_GATEWAY_API_KEY=$(openssl rand -base64 45 | tr -d '/+=')
          echo "::add-mask::${MCP_GATEWAY_API_KEY}"
          export MCP_GATEWAY_API_KEY
          export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
          mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
          export DEBUG="*"
          
          export GH_AW_ENGINE="copilot"
          export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.5'
          
          mkdir -p /home/runner/.copilot
          cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
          {
            "mcpServers": {
              "github": {
                "type": "stdio",
                "container": "ghcr.io/github/github-mcp-server:v0.31.0",
                "env": {
                  "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
                  "GITHUB_READ_ONLY": "1",
                  "GITHUB_TOOLSETS": "all"
                }
              },
              "safeoutputs": {
                "type": "http",
                "url": "http://host.docker.internal:$GH_AW_SAFE_OUTPUTS_PORT",
                "headers": {
                  "Authorization": "\${GH_AW_SAFE_OUTPUTS_API_KEY}"
                }
              }
            },
            "gateway": {
              "port": $MCP_GATEWAY_PORT,
              "domain": "${MCP_GATEWAY_DOMAIN}",
              "apiKey": "${MCP_GATEWAY_API_KEY}",
              "payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}"
            }
          }
          GH_AW_MCP_CONFIG_EOF
      - name: Generate workflow overview
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
            await generateWorkflowOverview(core);
      - name: Download prompt artifact
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: prompt
          path: /tmp/gh-aw/aw-prompts
      - name: Clean git credentials
        run: bash /opt/gh-aw/actions/clean_git_credentials.sh
      - name: Execute GitHub Copilot CLI
        id: agentic_execution
        # Copilot CLI tool arguments (sorted):
        timeout-minutes: 60
        run: |
          set -o pipefail
          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "*.jsr.io,*.pythonhosted.org,adoptium.net,anaconda.org,api.adoptium.net,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.foojay.io,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.npms.io,api.nuget.org,api.snapcraft.io,archive.apache.org,archive.ubuntu.com,azure.archive.ubuntu.com,azuresearch-usnc.nuget.org,azuresearch-ussc.nuget.org,binstar.org,bootstrap.pypa.io,builds.dotnet.microsoft.com,bun.sh,cdn.azul.com,cdn.jsdelivr.net,central.sonatype.com,ci.dot.net,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,dc.services.visualstudio.com,deb.nodesource.com,deno.land,dist.nuget.org,dl.google.com,dlcdn.apache.org,dot.net,dotnet.microsoft.com,dotnetcli.blob.core.windows.net,download.eclipse.org,download.java.net,download.oracle.com,downloads.gradle-dn.com,esm.sh,files.pythonhosted.org,get.pnpm.io,github.com,googleapis.deno.dev,googlechromelabs.github.io,gradle.org,host.docker.internal,index.crates.io,jcenter.bintray.com,jdk.java.net,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,maven.apache.org,maven.google.com,maven.oracle.com,maven.pkg.github.com,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,nuget.org,nuget.pkg.github.com,nugetregistryv2prod.blob.core.windows.net,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,oneocsp.microsoft.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,pkgs.dev.azure.com,plugins-artifacts.gradle.org,plugins.gradle.org,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.anaconda.com,repo.continuum.io,repo.gradle.org,repo.grails.org,repo.maven.apache.org,repo.spring.io,repo.yarnpkg.com,repo1.maven.org,s.symcb.com,s.symcd.com,security.ubuntu.com,services.gradle.org,sh.rustup.rs,skimdb.npmjs.com,static.crates.io,static.rust-lang.org,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.java.com,www.microsoft.com,www.npmjs.com,www.npmjs.org,yarnpkg.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
          GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
          GITHUB_WORKSPACE: ${{ github.workspace }}
          XDG_CONFIG_HOME: /home/runner
      - name: Configure Git credentials
        env:
          REPO_NAME: ${{ github.repository }}
          SERVER_URL: ${{ github.server_url }}
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
          git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
          echo "Git configured with standard GitHub Actions identity"
      - name: Copy Copilot session state files to logs
        if: always()
        continue-on-error: true
        run: |
          # Copy Copilot session state files to logs folder for artifact collection
          # This ensures they are in /tmp/gh-aw/ where secret redaction can scan them
          SESSION_STATE_DIR="$HOME/.copilot/session-state"
          LOGS_DIR="/tmp/gh-aw/sandbox/agent/logs"
          
          if [ -d "$SESSION_STATE_DIR" ]; then
            echo "Copying Copilot session state files from $SESSION_STATE_DIR to $LOGS_DIR"
            mkdir -p "$LOGS_DIR"
            cp -v "$SESSION_STATE_DIR"/*.jsonl "$LOGS_DIR/" 2>/dev/null || true
            echo "Session state files copied successfully"
          else
            echo "No session-state directory found at $SESSION_STATE_DIR"
          fi
      - name: Stop MCP Gateway
        if: always()
        continue-on-error: true
        env:
          MCP_GATEWAY_PORT: ${{ steps.start-mcp-gateway.outputs.gateway-port }}
          MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }}
          GATEWAY_PID: ${{ steps.start-mcp-gateway.outputs.gateway-pid }}
        run: |
          bash /opt/gh-aw/actions/stop_mcp_gateway.sh "$GATEWAY_PID"
      - name: Redact secrets in logs
        if: always()
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/redact_secrets.cjs');
            await main();
        env:
          GH_AW_SECRET_NAMES: 'COPILOT_GITHUB_TOKEN,GH_AW_GITHUB_MCP_SERVER_TOKEN,GH_AW_GITHUB_TOKEN,GITHUB_TOKEN'
          SECRET_COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
          SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
          SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload Safe Outputs
        if: always()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: safe-output
          path: ${{ env.GH_AW_SAFE_OUTPUTS }}
          if-no-files-found: warn
      - name: Ingest agent output
        id: collect_output
        if: always()
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
          GH_AW_ALLOWED_DOMAINS: "*.jsr.io,*.pythonhosted.org,adoptium.net,anaconda.org,api.adoptium.net,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.foojay.io,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.npms.io,api.nuget.org,api.snapcraft.io,archive.apache.org,archive.ubuntu.com,azure.archive.ubuntu.com,azuresearch-usnc.nuget.org,azuresearch-ussc.nuget.org,binstar.org,bootstrap.pypa.io,builds.dotnet.microsoft.com,bun.sh,cdn.azul.com,cdn.jsdelivr.net,central.sonatype.com,ci.dot.net,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,dc.services.visualstudio.com,deb.nodesource.com,deno.land,dist.nuget.org,dl.google.com,dlcdn.apache.org,dot.net,dotnet.microsoft.com,dotnetcli.blob.core.windows.net,download.eclipse.org,download.java.net,download.oracle.com,downloads.gradle-dn.com,esm.sh,files.pythonhosted.org,get.pnpm.io,github.com,googleapis.deno.dev,googlechromelabs.github.io,gradle.org,host.docker.internal,index.crates.io,jcenter.bintray.com,jdk.java.net,json-schema.org,json.schemastore.org,jsr.io,keyserver.ubuntu.com,maven.apache.org,maven.google.com,maven.oracle.com,maven.pkg.github.com,nodejs.org,npm.pkg.github.com,npmjs.com,npmjs.org,nuget.org,nuget.pkg.github.com,nugetregistryv2prod.blob.core.windows.net,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,oneocsp.microsoft.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,pkgs.dev.azure.com,plugins-artifacts.gradle.org,plugins.gradle.org,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.bower.io,registry.npmjs.com,registry.npmjs.org,registry.yarnpkg.com,repo.anaconda.com,repo.continuum.io,repo.gradle.org,repo.grails.org,repo.maven.apache.org,repo.spring.io,repo.yarnpkg.com,repo1.maven.org,s.symcb.com,s.symcd.com,security.ubuntu.com,services.gradle.org,sh.rustup.rs,skimdb.npmjs.com,static.crates.io,static.rust-lang.org,storage.googleapis.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.java.com,www.microsoft.com,www.npmjs.com,www.npmjs.org,yarnpkg.com"
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_API_URL: ${{ github.api_url }}
          GH_AW_COMMAND: repo-assist
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/collect_ndjson_output.cjs');
            await main();
      - name: Upload sanitized agent output
        if: always() && env.GH_AW_AGENT_OUTPUT
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: agent-output
          path: ${{ env.GH_AW_AGENT_OUTPUT }}
          if-no-files-found: warn
      - name: Upload engine output files
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: agent_outputs
          path: |
            /tmp/gh-aw/sandbox/agent/logs/
            /tmp/gh-aw/redacted-urls.log
          if-no-files-found: ignore
      - name: Parse agent logs for step summary
        if: always()
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: /tmp/gh-aw/sandbox/agent/logs/
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_copilot_log.cjs');
            await main();
      - name: Parse MCP Gateway logs for step summary
        if: always()
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_mcp_gateway_log.cjs');
            await main();
      - name: Print firewall logs
        if: always()
        continue-on-error: true
        env:
          AWF_LOGS_DIR: /tmp/gh-aw/sandbox/firewall/logs
        run: |
          # Fix permissions on firewall logs so they can be uploaded as artifacts
          # AWF runs with sudo, creating files owned by root
          sudo chmod -R a+r /tmp/gh-aw/sandbox/firewall/logs 2>/dev/null || true
          # Only run awf logs summary if awf command exists (it may not be installed if workflow failed before install step)
          if command -v awf &> /dev/null; then
            awf logs summary | tee -a "$GITHUB_STEP_SUMMARY"
          else
            echo 'AWF binary not installed, skipping firewall log summary'
          fi
      # Upload repo memory as artifacts for push job
      - name: Upload repo-memory artifact (default)
        if: always()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: repo-memory-default
          path: /tmp/gh-aw/repo-memory/default
          retention-days: 1
          if-no-files-found: ignore
      - name: Upload agent artifacts
        if: always()
        continue-on-error: true
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: agent-artifacts
          path: |
            /tmp/gh-aw/aw-prompts/prompt.txt
            /tmp/gh-aw/aw_info.json
            /tmp/gh-aw/mcp-logs/
            /tmp/gh-aw/sandbox/firewall/logs/
            /tmp/gh-aw/agent-stdio.log
            /tmp/gh-aw/agent/
            /tmp/gh-aw/aw-*.patch
          if-no-files-found: ignore
      # --- Threat Detection (inline) ---
      - name: Check if detection needed
        id: detection_guard
        if: always()
        env:
          OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
          HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
        run: |
          if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
            echo "run_detection=true" >> "$GITHUB_OUTPUT"
            echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
          else
            echo "run_detection=false" >> "$GITHUB_OUTPUT"
            echo "Detection skipped: no agent outputs or patches to analyze"
          fi
      - name: Clear MCP configuration for detection
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        run: |
          rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
          rm -f /home/runner/.copilot/mcp-config.json
          rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
      - name: Prepare threat detection files
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        run: |
          mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
          cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
          cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
          for f in /tmp/gh-aw/aw-*.patch; do
            [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
          done
          echo "Prepared threat detection files:"
          ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
      - name: Setup threat detection
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          WORKFLOW_NAME: "Repo Assist"
          WORKFLOW_DESCRIPTION: "A friendly repository assistant that runs daily to support contributors and maintainers.\nCan also be triggered on-demand via '/repo-assist <instructions>' to perform specific tasks.\n- Comments helpfully on open issues to unblock contributors and onboard newcomers\n- Identifies issues that can be fixed and creates draft pull requests with fixes\n- Studies the codebase and proposes improvements via PRs\n- Updates its own PRs when CI fails or merge conflicts arise\n- Nudges stale PRs waiting for author response\n- Manages issue and PR labels for organization\n- Prepares releases by updating changelogs and proposing version bumps\n- Welcomes new contributors with friendly onboarding\n- Maintains a persistent memory of work done and what remains\nAlways polite, constructive, and mindful of the project's goals."
          HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
            await main();
      - name: Ensure threat-detection directory and log
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        run: |
          mkdir -p /tmp/gh-aw/threat-detection
          touch /tmp/gh-aw/threat-detection/detection.log
      - name: Execute GitHub Copilot CLI
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        id: detection_agentic_execution
        # Copilot CLI tool arguments (sorted):
        # --allow-tool shell(cat)
        # --allow-tool shell(grep)
        # --allow-tool shell(head)
        # --allow-tool shell(jq)
        # --allow-tool shell(ls)
        # --allow-tool shell(tail)
        # --allow-tool shell(wc)
        timeout-minutes: 20
        run: |
          set -o pipefail
          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
          GITHUB_WORKSPACE: ${{ github.workspace }}
          XDG_CONFIG_HOME: /home/runner
      - name: Parse threat detection results
        id: parse_detection_results
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
            await main();
      - name: Upload threat detection log
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: threat-detection.log
          path: /tmp/gh-aw/threat-detection/detection.log
          if-no-files-found: ignore
      - name: Set detection conclusion
        id: detection_conclusion
        if: always()
        env:
          RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
          DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
        run: |
          if [[ "$RUN_DETECTION" != "true" ]]; then
            echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
            echo "success=true" >> "$GITHUB_OUTPUT"
            echo "Detection was not needed, marking as skipped"
          elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
            echo "conclusion=success" >> "$GITHUB_OUTPUT"
            echo "success=true" >> "$GITHUB_OUTPUT"
            echo "Detection passed successfully"
          else
            echo "conclusion=failure" >> "$GITHUB_OUTPUT"
            echo "success=false" >> "$GITHUB_OUTPUT"
            echo "Detection found issues"
          fi

  conclusion:
    needs:
      - activation
      - agent
      - push_repo_memory
      - safe_outputs
    if: (always()) && (needs.agent.result != 'skipped')
    runs-on: ubuntu-slim
    permissions:
      contents: write
      discussions: write
      issues: write
      pull-requests: write
    outputs:
      noop_message: ${{ steps.noop.outputs.noop_message }}
      tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
      total_count: ${{ steps.missing_tool.outputs.total_count }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print
          echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/safeoutputs/agent_output.json" >> "$GITHUB_ENV"
      - name: Process No-Op Messages
        id: noop
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_NOOP_MAX: "1"
          GH_AW_WORKFLOW_NAME: "Repo Assist"
          GH_AW_WORKFLOW_SOURCE: "githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd"
          GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/noop.cjs');
            await main();
      - name: Record Missing Tool
        id: missing_tool
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_WORKFLOW_NAME: "Repo Assist"
          GH_AW_WORKFLOW_SOURCE: "githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd"
          GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/missing_tool.cjs');
            await main();
      - name: Handle Agent Failure
        id: handle_agent_failure
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_WORKFLOW_NAME: "Repo Assist"
          GH_AW_WORKFLOW_SOURCE: "githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd"
          GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md"
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_WORKFLOW_ID: "repo-assist"
          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
          GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
          GH_AW_CODE_PUSH_FAILURE_ERRORS: ${{ needs.safe_outputs.outputs.code_push_failure_errors }}
          GH_AW_CODE_PUSH_FAILURE_COUNT: ${{ needs.safe_outputs.outputs.code_push_failure_count }}
          GH_AW_REPO_MEMORY_VALIDATION_FAILED_default: ${{ needs.push_repo_memory.outputs.validation_failed_default }}
          GH_AW_REPO_MEMORY_VALIDATION_ERROR_default: ${{ needs.push_repo_memory.outputs.validation_error_default }}
          GH_AW_GROUP_REPORTS: "false"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/handle_agent_failure.cjs');
            await main();
      - name: Handle No-Op Message
        id: handle_noop_message
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_WORKFLOW_NAME: "Repo Assist"
          GH_AW_WORKFLOW_SOURCE: "githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd"
          GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md"
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
          GH_AW_NOOP_REPORT_AS_ISSUE: "true"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
            await main();
      - name: Handle Create Pull Request Error
        id: handle_create_pr_error
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_WORKFLOW_NAME: "Repo Assist"
          GH_AW_WORKFLOW_SOURCE: "githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd"
          GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md"
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/handle_create_pr_error.cjs');
            await main();

  pre_activation:
    if: >
      ((github.event_name == 'issues' || github.event_name == 'issue_comment' || github.event_name == 'pull_request' ||
      github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' || github.event_name == 'discussion_comment') &&
      ((github.event_name == 'issues') && ((startsWith(github.event.issue.body, '/repo-assist ')) || (github.event.issue.body == '/repo-assist')) ||
      (github.event_name == 'issue_comment') && (((startsWith(github.event.comment.body, '/repo-assist ')) ||
      (github.event.comment.body == '/repo-assist')) && (github.event.issue.pull_request == null)) || (github.event_name == 'issue_comment') &&
      (((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) &&
      (github.event.issue.pull_request != null)) || (github.event_name == 'pull_request_review_comment') &&
      ((startsWith(github.event.comment.body, '/repo-assist ')) || (github.event.comment.body == '/repo-assist')) ||
      (github.event_name == 'pull_request') && ((startsWith(github.event.pull_request.body, '/repo-assist ')) ||
      (github.event.pull_request.body == '/repo-assist')) || (github.event_name == 'discussion') && ((startsWith(github.event.discussion.body, '/repo-assist ')) ||
      (github.event.discussion.body == '/repo-assist')) || (github.event_name == 'discussion_comment') && ((startsWith(github.event.comment.body, '/repo-assist ')) ||
      (github.event.comment.body == '/repo-assist')))) || (!(github.event_name == 'issues' || github.event_name == 'issue_comment' ||
      github.event_name == 'pull_request' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' ||
      github.event_name == 'discussion_comment'))
    runs-on: ubuntu-slim
    permissions:
      discussions: write
      issues: write
      pull-requests: write
    outputs:
      activated: ${{ (steps.check_membership.outputs.is_team_member == 'true') && (steps.check_command_position.outputs.command_position_ok == 'true') }}
      matched_command: ${{ steps.check_command_position.outputs.matched_command }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2
        with:
          destination: /opt/gh-aw/actions
      - name: Add eyes reaction for immediate feedback
        id: react
        if: github.event_name == 'issues' || github.event_name == 'issue_comment' || github.event_name == 'pull_request_review_comment' || github.event_name == 'discussion' || github.event_name == 'discussion_comment' || (github.event_name == 'pull_request') && (github.event.pull_request.head.repo.id == github.repository_id)
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_REACTION: "eyes"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/add_reaction.cjs');
            await main();
      - name: Check team membership for command workflow
        id: check_membership
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_REQUIRED_ROLES: admin,maintainer,write
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/check_membership.cjs');
            await main();
      - name: Check command position
        id: check_command_position
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_COMMANDS: "[\"repo-assist\"]"
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/check_command_position.cjs');
            await main();

  push_repo_memory:
    needs: agent
    if: always() && needs.agent.outputs.detection_success == 'true'
    runs-on: ubuntu-latest
    permissions:
      contents: write
    outputs:
      validation_error_default: ${{ steps.push_repo_memory_default.outputs.validation_error }}
      validation_failed_default: ${{ steps.push_repo_memory_default.outputs.validation_failed }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2
        with:
          destination: /opt/gh-aw/actions
      - name: Checkout repository
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
          sparse-checkout: .
      - name: Configure Git credentials
        env:
          REPO_NAME: ${{ github.repository }}
          SERVER_URL: ${{ github.server_url }}
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
          git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
          echo "Git configured with standard GitHub Actions identity"
      - name: Download repo-memory artifact (default)
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        continue-on-error: true
        with:
          name: repo-memory-default
          path: /tmp/gh-aw/repo-memory/default
      - name: Push repo-memory changes (default)
        id: push_repo_memory_default
        if: always()
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_TOKEN: ${{ github.token }}
          GITHUB_RUN_ID: ${{ github.run_id }}
          GITHUB_SERVER_URL: ${{ github.server_url }}
          ARTIFACT_DIR: /tmp/gh-aw/repo-memory/default
          MEMORY_ID: default
          TARGET_REPO: ${{ github.repository }}
          BRANCH_NAME: memory/repo-assist
          MAX_FILE_SIZE: 10240
          MAX_FILE_COUNT: 100
          MAX_PATCH_SIZE: 10240
          ALLOWED_EXTENSIONS: '[]'
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/push_repo_memory.cjs');
            await main();

  safe_outputs:
    needs:
      - activation
      - agent
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
    runs-on: ubuntu-slim
    permissions:
      contents: write
      discussions: write
      issues: write
      pull-requests: write
    timeout-minutes: 15
    env:
      GH_AW_ENGINE_ID: "copilot"
      GH_AW_WORKFLOW_ID: "repo-assist"
      GH_AW_WORKFLOW_NAME: "Repo Assist"
      GH_AW_WORKFLOW_SOURCE: "githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd"
      GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/githubnext/agentics/tree/ee49512da7887942965ac0a0e48357106313c9dd/workflows/repo-assist.md"
    outputs:
      code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
      code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
      create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
      create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
      process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@e32435511ac2c5aa0e08b19284a25dc98fadf1e1 # v0.50.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print
          echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/safeoutputs/agent_output.json" >> "$GITHUB_ENV"
      - name: Download patch artifact
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-artifacts
          path: /tmp/gh-aw/
      - name: Checkout repository
        if: (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'create_pull_request'))) || (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'push_to_pull_request_branch')))
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          ref: ${{ github.base_ref || github.ref_name }}
          token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          persist-credentials: false
          fetch-depth: 1
      - name: Configure Git credentials
        if: (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'create_pull_request'))) || (((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'push_to_pull_request_branch')))
        env:
          REPO_NAME: ${{ github.repository }}
          SERVER_URL: ${{ github.server_url }}
          GIT_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
          git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${GIT_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
          echo "Git configured with standard GitHub Actions identity"
      - name: Process Safe Outputs
        id: process_safe_outputs
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"hide_older_comments\":true,\"max\":10,\"target\":\"*\"},\"add_labels\":{\"allowed\":[\"bug\",\"enhancement\",\"help wanted\",\"good first issue\",\"spam\",\"off topic\",\"documentation\",\"question\",\"duplicate\",\"wontfix\",\"needs triage\",\"needs investigation\",\"breaking change\",\"performance\",\"security\",\"refactor\"],\"max\":30,\"target\":\"*\"},\"create_issue\":{\"labels\":[\"automation\",\"repo-assist\"],\"max\":4,\"title_prefix\":\"[Repo Assist] \"},\"create_pull_request\":{\"base_branch\":\"${{ github.base_ref || github.ref_name }}\",\"draft\":true,\"labels\":[\"automation\",\"repo-assist\"],\"max\":4,\"max_patch_size\":1024,\"title_prefix\":\"[Repo Assist] \"},\"missing_data\":{},\"missing_tool\":{},\"push_to_pull_request_branch\":{\"base_branch\":\"${{ github.base_ref || github.ref_name }}\",\"if_no_changes\":\"warn\",\"max\":4,\"max_patch_size\":1024,\"target\":\"*\",\"title_prefix\":\"[Repo Assist] \"},\"remove_labels\":{\"allowed\":[\"bug\",\"enhancement\",\"help wanted\",\"good first issue\",\"spam\",\"off topic\",\"documentation\",\"question\",\"duplicate\",\"wontfix\",\"needs triage\",\"needs investigation\",\"breaking change\",\"performance\",\"security\",\"refactor\"],\"max\":5,\"target\":\"*\"},\"update_issue\":{\"allow_body\":true,\"max\":1,\"target\":\"*\",\"title_prefix\":\"[Repo Assist] \"}}"
          GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }}
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
            await main();
      - name: Upload safe output items manifest
        if: always()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: safe-output-items
          path: /tmp/safe-output-items.jsonl
          if-no-files-found: warn



================================================
FILE: .github/workflows/repo-assist.md
================================================
---
description: |
  A friendly repository assistant that runs daily to support contributors and maintainers.
  Can also be triggered on-demand via '/repo-assist <instructions>' to perform specific tasks.
  - Comments helpfully on open issues to unblock contributors and onboard newcomers
  - Identifies issues that can be fixed and creates draft pull requests with fixes
  - Studies the codebase and proposes improvements via PRs
  - Updates its own PRs when CI fails or merge conflicts arise
  - Nudges stale PRs waiting for author response
  - Manages issue and PR labels for organization
  - Prepares releases by updating changelogs and proposing version bumps
  - Welcomes new contributors with friendly onboarding
  - Maintains a persistent memory of work done and what remains
  Always polite, constructive, and mindful of the project's goals.

on:
  schedule: daily
  workflow_dispatch:
  slash_command:
    name: repo-assist
  reaction: "eyes"

timeout-minutes: 60

permissions: read-all

network:
  allowed:
  - defaults
  - dotnet
  - node
  - python
  - rust
  - java

safe-outputs:
  add-comment:
    max: 10
    target: "*"
    hide-older-comments: true
  create-pull-request:
    draft: true
    title-prefix: "[Repo Assist] "
    labels: [automation, repo-assist]
    max: 4
  push-to-pull-request-branch:
    target: "*"
    title-prefix: "[Repo Assist] "
    max: 4
  create-issue:
    title-prefix: "[Repo Assist] "
    labels: [automation, repo-assist]
    max: 4
  update-issue:
    target: "*"
    title-prefix: "[Repo Assist] "
    max: 1
  add-labels:
    allowed: [bug, enhancement, "help wanted", "good first issue", "spam", "off topic", documentation, question, duplicate, wontfix, "needs triage", "needs investigation", "breaking change", performance, security, refactor]
    max: 30
    target: "*" 
  remove-labels:
    allowed: [bug, enhancement, "help wanted", "good first issue", "spam", "off topic", documentation, question, duplicate, wontfix, "needs triage", "needs investigation", "breaking change", performance, security, refactor]
    max: 5
    target: "*" 

tools:
  web-fetch:
  github:
    toolsets: [all]
  bash: true
  repo-memory: true

source: githubnext/agentics/workflows/repo-assist.md@ee49512da7887942965ac0a0e48357106313c9dd
engine: copilot
---

# Repo Assist

## Command Mode

Take heed of **instructions**: "${{ steps.sanitized.outputs.text }}"

If these are non-empty (not ""), then you have been triggered via `/repo-assist <instructions>`. Follow the user's instructions instead of the normal scheduled workflow. Focus exclusively on those instructions. Apply all the same guidelines (read AGENTS.md, run formatters/linters/tests, be polite, use AI disclosure). Skip the round-robin task workflow below and the reporting and instead directly do what the user requested. If no specific instructions were provided (empty or blank), proceed with the normal scheduled workflow below. 

Then exit  -  do not run the normal workflow after completing the instructions.

## Non-Command Mode

You are Repo Assist for `${{ github.repository }}`. Your job is to support human contributors, help onboard newcomers, identify improvements, and fix bugs by creating pull requests. You never merge pull requests yourself; you leave that decision to the human maintainers.

Always be:

- **Polite and encouraging**: Every contributor deserves respect. Use warm, inclusive language.
- **Concise**: Keep comments focused and actionable. Avoid walls of text.
- **Mindful of project values**: Prioritize **stability**, **correctness**, and **minimal dependencies**. Do not introduce new dependencies without clear justification.
- **Transparent about your nature**: Always clearly identify yourself as Repo Assist, an automated AI assistant. Never pretend to be a human maintainer.
- **Restrained**: When in doubt, do nothing. It is always better to stay silent than to post a redundant, unhelpful, or spammy comment. Human maintainers' attention is precious  -  do not waste it.

## Memory

Use persistent repo memory to track:

- issues already commented on (with timestamps to detect new human activity)
- fix attempts and outcomes, improvement ideas already submitted, a short to-do list
- a **backlog cursor** so each run continues where the previous one left off
- **which tasks were last run** (with timestamps) to support round-robin scheduling
- the last time you performed certain periodic tasks (dependency updates, release preparation) to enforce frequency limits
- previously checked off items (checked off by maintainer) in the Monthly Activity Summary to maintain an accurate pending actions list for maintainers

Read memory at the **start** of every run; update it at the **end**.

**Important**: Memory may not be 100% accurate. Issues may have been created, closed, or commented on; PRs may have been created, merged, commented on, or closed since the last run. Always verify memory against current repository state  -  reviewing recent activity since your last run is wise before acting on stale assumptions.

## Workflow

Use a **round-robin strategy**: each run, work on a different subset of tasks, rotating through them across runs so that all tasks get attention over time. Use memory to track which tasks were run most recently, and prioritise the ones that haven't run for the longest. Aim to do 2–4 tasks per run (plus the mandatory Task 11).

Always do Task 11 (Update Monthly Activity Summary Issue) every run. In all comments and PR descriptions, identify yourself as "Repo Assist".

### Task 1: Triage and Comment on Open Issues

1. List open issues sorted by creation date ascending (oldest first). Resume from your memory's backlog cursor; reset when you reach the end.
2. For each issue (save cursor in memory): prioritise issues that have never received a Repo Assist comment, including old backlog issues. Engage on an issue only if you have something insightful, accurate, helpful, and constructive to say. Expect to engage substantively on 1–3 issues per run; you may scan many more to find good candidates. Only re-engage on already-commented issues if new human comments have appeared since your last comment.
3. Respond based on type: bugs → ask for a reproduction or suggest a cause; feature requests → discuss feasibility; questions → answer concisely; onboarding → point to README/CONTRIBUTING. Never post vague acknowledgements, restatements, or follow-ups to your own comments.
4. Begin every comment with: `🤖 *This is an automated response from Repo Assist.*`
5. Update memory with comments made and the new cursor position.

### Task 2: Fix Issues via Pull Requests

**Only attempt fixes you are confident about.**

1. Review issues labelled `bug`, `help wanted`, or `good first issue`, plus any identified as fixable in Task 1.
2. For each fixable issue:
   a. Check memory  -  skip if you've already tried. Never create duplicate PRs.
   b. Create a fresh branch off `main`: `repo-assist/fix-issue-<N>-<desc>`.
   c. Implement a minimal, surgical fix. Do not refactor unrelated code.
   d. **Build and test (required)**: do not create a PR if the build fails or tests fail due to your changes. If tests fail due to infrastructure, create the PR but document it.
   e. Add a test for the bug if feasible; re-run tests.
   f. Create a draft PR with: AI disclosure, `Closes #N`, root cause, fix rationale, trade-offs, and a Test Status section showing build/test outcome.
   g. Post a single brief comment on the issue linking to the PR.
3. Update memory with fix attempts and outcomes.

### Task 3: Study the Codebase and Propose Improvements

**Be highly selective  -  only propose clearly beneficial, low-risk improvements.**

1. Check memory for already-submitted ideas; do not re-propose them.
2. Good candidates: API usability, performance, documentation gaps, test coverage, code clarity.
3. Create a fresh branch `repo-assist/improve-<desc>` off `main`, implement the improvement, build and test (same requirements as Task 2), then create a draft PR with AI disclosure, rationale, and Test Status section.
4. If not ready to implement, file an issue and note it in memory.
5. Update memory.

### Task 4: Update Dependencies and Engineering

1. Check for outdated dependencies. Prefer minor/patch updates; propose major bumps only with clear benefit and no breaking API impact.
2. Create a fresh branch `repo-assist/deps-update-<date>`, update dependencies, build and test, then create a draft PR with Test Status section.
3. **Bundle Dependabot PRs**: If multiple open Dependabot PRs exist, create a single bundled PR that applies all compatible updates together. Create a fresh branch `repo-assist/deps-bundle-<date>`, cherry-pick or merge the changes from each Dependabot PR, resolve any conflicts, build and test, then create a draft PR listing all bundled updates. Reference the original Dependabot PRs in the description so maintainers can close them after merging the bundle.
4. Look for other engineering improvements (CI tooling, runtime/SDK versions)  -  same build/test requirements apply.
5. Update memory with what was checked and when.

### Task 5: Maintain Repo Assist Pull Requests

1. List all open PRs with the `[Repo Assist]` title prefix.
2. For each PR: fix CI failures caused by your changes by pushing updates; resolve merge conflicts. If you've retried multiple times without success, comment and leave for human review.
3. Do not push updates for infrastructure-only failures  -  comment instead.
4. Update memory.

### Task 6: Stale PR Nudges

1. List open PRs not updated in 14+ days.
2. For each (check memory  -  skip if already nudged): if the PR is waiting on the author, post a single polite comment asking if they need help or want to hand off. Do not comment if the PR is waiting on a maintainer.
3. **Maximum 3 nudges per run.** Update memory.

### Task 7: Manage Labels

Process as many issues and PRs as possible each run. Resume from memory's backlog cursor.

For each item, apply the best-fitting labels from: `bug`, `enhancement`, `help wanted`, `good first issue`, `documentation`, `question`, `duplicate`, `wontfix`, `spam`, `off topic`, `needs triage`, `needs investigation`, `breaking change`, `performance`, `security`, `refactor`. Remove misapplied labels. Apply multiple where appropriate; skip any you're not confident about. After labeling, post a comment if you have something genuinely useful to say.

Update memory with labels applied and cursor position.

### Task 8: Release Preparation

1. Find merged PRs since the last release (check changelog or release tags).
2. If significant unreleased changes exist, determine the version bump (patch/minor/major  -  never propose major without maintainer approval), create a fresh branch `repo-assist/release-vX.Y.Z`, update the changelog, and create a draft PR with AI disclosure and Test Status section.
3. Skip if: no meaningful changes, a release PR is already open, or you recently proposed one.
4. Update memory.

### Task 9: Welcome New Contributors

1. List PRs and issues opened in the last 24 hours. Check memory  -  do not welcome the same person twice.
2. For first-time contributors, post a warm welcome with links to README and CONTRIBUTING.
3. **Maximum 3 welcomes per run.** Update memory.

### Task 10: Take the Repository Forward

Proactively move the repository forward. Use your judgement to identify the most valuable thing to do  -  implement a backlog feature, investigate a difficult bug, draft a plan or proposal, or chart out future work. This work may span multiple runs; check your memory for anything in progress and continue it before starting something new. Record progress and next steps in memory at the end of each run.

### Task 11: Update Monthly Activity Summary Issue (ALWAYS DO THIS TASK IN ADDITION TO OTHERS)

Maintain a single open issue titled `[Repo Assist] Monthly Activity {YYYY}-{MM}` as a rolling summary of all Repo Assist activity for the current month.

1. Search for an open `[Repo Assist] Monthly Activity` issue with label `repo-assist`. If it's for the current month, update it. If for a previous month, close it and create a new one. Read any maintainer comments  -  they may contain instructions; note them in memory.
2. **Issue body format**  -  use **exactly** this structure:

   ```markdown
   🤖 *Repo Assist here  -  I'm an automated AI assistant for this repository.*

   ## Activity for <Month Year>

   ## Suggested Actions for Maintainer

   **Comprehensive list** of all pending actions requiring maintainer attention (excludes items already actioned and checked off). 
   - Reread the issue you're updating before you update it  -  there may be new checkbox adjustments since your last update that require you to adjust the suggested actions.
   - List **all** the comments, PRs, and issues that need attention
   - Exclude **all** items that have either
     a. previously been checked off by the user in previous editions of the Monthly Activity Summary, or
     b. the items linked are closed/merged
   - Use memory to keep track of items checked off by the user.
   - Be concise  -  one line per item, repeating the format lines as necessary:

   * [ ] **Review PR** #<number>: <summary>  -  [Review](<link>)
   * [ ] **Check comment** #<number>: Repo Assist commented  -  verify guidance is helpful  -  [View](<link>)
   * [ ] **Merge PR** #<number>: <reason>  -  [Review](<link>)
   * [ ] **Close issue** #<number>: <reason>  -  [View](<link>)
   * [ ] **Close PR** #<number>: <reason>  -  [View](<link>)
   * [ ] **Define goal**: <suggestion>  -  [Related issue](<link>)

   *(If no actions needed, state "No suggested actions at this time.")*

   ## Future Work for Repo Assist

   {List future work for Repo Assist}

   *(If nothing pending, skip this section.)*

   ## Run History

   ### <YYYY-MM-DD HH:MM UTC>  -  [Run](<https://github.com/<repo>/actions/runs/<run-id>>)
   - 💬 Commented on #<number>: <short description>
   - 🔧 Created PR #<number>: <short description>
   - 🏷️ Labelled #<number> with `<label>`
   - 📝 Created issue #<number>: <short description>

   ### <YYYY-MM-DD HH:MM UTC>  -  [Run](<https://github.com/<repo>/actions/runs/<run-id>>)
   - 🔄 Updated PR #<number>: <short description>
   - 💬 Commented on PR #<number>: <short description>
   ```

3. **Format enforcement (MANDATORY)**:
   - Always use the exact format above. If the existing body uses a different format, rewrite it entirely.
   - **Suggested Actions comes first**, immediately after the month heading, so maintainers see the action list without scrolling.
   - **Run History is in reverse chronological order**  -  prepend each new run's entry at the top of the Run History section so the most recent activity appears first.
   - **Each run heading includes the date, time (UTC), and a link** to the GitHub Actions run: `### YYYY-MM-DD HH:MM UTC  -  [Run](https://github.com/<repo>/actions/runs/<run-id>)`. Use `${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}` for the current run's link.
   - **Actively remove completed items** from "Suggested Actions"  -  do not tick them `[x]`; delete the line when actioned. The checklist contains only pending items.
   - Use `* [ ]` checkboxes in "Suggested Actions". Never use plain bullets there.
4. **Comprehensive suggested actions**: The "Suggested Actions for Maintainer" section must be a **complete list** of all pending items requiring maintainer attention, including:
   - All open Repo Assist PRs needing review or merge
   - **All Repo Assist comments** that haven't been acknowledged by a maintainer (use "Check comment" for each)
   - Issues that should be closed (duplicates, resolved, etc.)
   - PRs that should be closed (stale, superseded, etc.)
   - Any strategic suggestions (goals, priorities)
   Use repo memory and the activity log to compile this list. Include direct links for every item. Keep entries to one line each.
5. Do not update the activity issue if nothing was done in the current run.

## Guidelines

- **No breaking changes** without maintainer approval via a tracked issue.
- **No new dependencies** without discussion in an issue first.
- **Small, focused PRs**  -  one concern per PR.
- **Read AGENTS.md first**: before starting work on any pull request, read the repository's `AGENTS.md` file (if present) to understand project-specific conventions, coding standards, and contribution requirements.
- **Build, format, lint, and test before every PR**: run any code formatting, linting, and testing checks configured in the repository. Build failure, lint errors, or test failures caused by your changes → do not create the PR. Infrastructure failures → create the PR but document in the Test Status section.
- **Respect existing style**  -  match code formatting and naming conventions.
- **AI transparency**: every comment, PR, and issue must include a Repo Assist disclosure with 🤖.
- **Anti-spam**: no repeated or follow-up comments to yourself in a single run; re-engage only when new human comments have appeared.
- **Systematic**: use the backlog cursor to process oldest issues first over successive runs. Do not stop early.
- **Quality over quantity**: noise erodes trust. Do nothing rather than add low-value output.

================================================
FILE: .gitignore
================================================
# See https://www.dartlang.org/guides/libraries/private-files

# Files and directories created by pub
.dart_tool/
.packages
build/
.DS_Store
# If you're building an application, you may want to check-in your pubspec.lock
pubspec.lock

.vscode/
# Directory created by dartdoc
# If you don't generate documentation locally you can remove this line.
doc/api/

# Avoid committing generated Javascript files:
*.dart.js
*.info.json      # Produced by the --dump-info flag.
*.js             # When generated by dart2js. Don't specify *.js if your
                 # project includes source files written in JavaScript.
*.js_
*.js.deps
*.js.map


================================================
FILE: CHANGELOG.md
================================================
## 3.0.6
Bug fixes and performance improvements

## 3.0.5
Bug fixes and performance improvements

## 3.0.2
Bug fixes and performance improvements

## 3.0.0
One letter TLD not allowed anymore

## 2.2.9
One letter TLD not allowed

## 2.1.2
Fix static analyzer warnings and update Dart SDK

## 2.0.1
Migrate to new null safety SDK. See https://dart.dev/null-safety.

## 2.0.0-nullsafety
Migrate to new null safety SDK. See https://dart.dev/null-safety.

## 1.0.6
Code clean up

## 1.0.5
Add parameter documentation for the `EmailValidator.validate` method

## 1.0.4
Fix Dart sdk analysis warnings
  
## 1.0.3
Minor code style fixes again
  
## 1.0.2
Minor code style fixes
  
## 1.0.1
`allowInternational` now defaults to true.
  
## 1.0.0
EmailValidator.Validate() is now, by Dart convention, EmailValidator.validate().

## 0.1.6
Cleaned up code a bit, no API changes.

## 0.1.5
Cleaned up code a bit, no API changes.

## 0.1.4
Now supports a broader range of Dart SDK versions

## 0.1.0
Validate emails through a static method `EmailValidator.validate()`.

================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to email-validator.dart

Thank you for your interest in contributing! This document explains how to set up the project locally, run tests, and submit a pull request.

## Development Setup

You'll need the [Dart SDK](https://dart.dev/get-dart) installed.

```bash
# Install dependencies
dart pub get
```

## Running Tests

```bash
dart test
```

Or to run the single test file directly:

```bash
dart test test/email_validator_test.dart
```

## Linting and Formatting

```bash
# Run the static analyser (must pass with no errors or warnings)
dart analyze --fatal-infos

# Check formatting (must pass before merging)
dart format --output=none --set-exit-if-changed .

# Auto-fix formatting
dart format .
```

## Project Structure

```
lib/email_validator.dart   # Single-file library — all parsing logic lives here
test/email_validator_test.dart  # All tests; valid/invalid/international address lists
example/example.dart       # Short usage example
```

The parser is cursor-based: a shared `_index` field advances through the email string inside the `EmailValidator` class. There are no external dependencies. Please keep it that way — no new dependencies without a prior discussion in an issue.
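As a rough, hypothetical sketch (not the library's actual code), a cursor-based parser of this shape keeps a shared index and advances it as each character is examined:

```dart
// Hypothetical illustration of the cursor-based parsing pattern only;
// the real logic lives in lib/email_validator.dart.
class _Scanner {
  _Scanner(this.input);

  final String input;
  int _index = 0; // shared cursor, advanced by every parsing step

  /// Consumes characters while [test] holds; true if any were consumed.
  bool takeWhile(bool Function(String) test) {
    final start = _index;
    while (_index < input.length && test(input[_index])) {
      _index++;
    }
    return _index > start;
  }

  /// Consumes one expected literal character, e.g. the '@' separator.
  bool expect(String ch) {
    if (_index < input.length && input[_index] == ch) {
      _index++;
      return true;
    }
    return false;
  }
}
```

A validator built on this pattern would call `takeWhile` for the local part, `expect('@')`, then `takeWhile` again for the domain, succeeding only if the cursor reaches the end of the string.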

## Submitting a Pull Request

1. Fork the repository and create a branch from `master`.
2. Make your changes. For bug fixes, add a regression test that fails before your fix and passes after.
3. Ensure `dart analyze --fatal-infos` and `dart format --output=none --set-exit-if-changed .` both pass.
4. Run `dart test` and confirm all tests pass.
5. Open a pull request with a clear description of the problem and solution.

## Reporting Bugs

Please open a GitHub issue with:
- The email address that produces the unexpected result.
- The expected outcome (valid/invalid) and the actual outcome.
- The package version you are using.


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2018 Fredrik Eilertsen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
# **Email Validator.dart**

[![CI](https://github.com/fredeil/email-validator.dart/actions/workflows/ci.yml/badge.svg)](https://github.com/fredeil/email-validator.dart/actions/workflows/ci.yml)
[![pub package](https://img.shields.io/pub/v/email_validator.svg)](https://pub.dev/packages/email_validator)

A simple Dart class for validating email addresses without using RegEx. Can also be used to validate emails within Flutter apps (see [Flutter email validation](https://github.com/fredeil/flutter-email-validator)).


**NB:** This library validates only the syntax of an email address; it does not look up the domain or check whether the address actually exists.

**Featured in:**
1. [How To Validate Emails in Flutter](https://betterprogramming.pub/how-to-validate-emails-in-flutter-957ae75926c9) by https://github.com/lucianojung
2. [Flutter Tutorial - Email Validation In 7 Minutes](https://www.youtube.com/watch?v=mXyifVJ-NFc) by https://github.com/JohannesMilke
3. [Flutter Tutorial - Email Validation | Package of the week](https://www.youtube.com/watch?v=ZN_7Pur5h8Q&t=31s) by https://github.com/Dhanraj-FlutterDev

**Found in several big libraries and apps:**

1. [Google Firebase](https://github.com/firebase/FirebaseUI-Flutter)
1. [Flutter GenUI](https://github.com/flutter/genui)
1. [Supabase - Flutter auth UI](https://github.com/supabase-community/flutter-auth-ui)
1. [TubeCards - The world’s best flashcard platform](https://github.com/friebetill/TubeCards)
1. [Serverpod - Serverpod is a next-generation app and web server, explicitly built for Flutter](https://github.com/serverpod/serverpod)
1. [Several other packages on pub.dev](https://pub.dev/packages?q=dependency%3Aemail_validator&sort=downloads)

And many more! 


## **Installation**

#### 1. Depend on it

Add this to your package's `pubspec.yaml` file:

```yaml
dependencies:
  email_validator: '^3.0.6'
```


#### 2. Install it

You can install packages from the command line:

```bash
$ dart pub get
```

Alternatively, your editor might support pub. Check the docs for your editor to learn more.

#### 3. Import it

Now in your Dart code, you can use:

```dart
import 'package:email_validator/email_validator.dart';
```

## **Usage**

Read the unit tests under `test/`, or see the code example below:

```dart
import 'package:email_validator/email_validator.dart';

void main() {
  var email = 'fredrik@gmail.com';

  assert(EmailValidator.validate(email));
}
```
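The same static API also rejects malformed addresses. A small sketch (the one-letter TLD behaviour follows the 3.0.0 changelog entry):

```dart
import 'package:email_validator/email_validator.dart';

void main() {
  // Syntactically valid address.
  print(EmailValidator.validate('fredrik@gmail.com')); // true

  // Missing local part before the '@'.
  print(EmailValidator.validate('@gmail.com')); // false

  // One-letter TLDs are rejected since 3.0.0.
  print(EmailValidator.validate('fredrik@gmail.c')); // false
}
```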

## Tips

You can also use this repo as a template for creating Dart packages; just clone it and start hacking :)



================================================
FILE: analysis_options.yaml
================================================
analyzer:
  strong-mode:
    implicit-dynamic: false
  errors:
    missing_required_param: warning
    missing_return: warning
    todo: ignore

linter:
  rules:
    # these rules are documented on and in the same order as
    # the Dart Lint rules page to make maintenance easier
    # https://github.com/dart-lang/linter/blob/master/example/all.yaml
    - always_declare_return_types
    - always_put_control_body_on_new_line
    - annotate_overrides
    - avoid_empty_else
    - avoid_field_initializers_in_const_classes
    - avoid_function_literals_in_foreach_calls
    - avoid_init_to_null
    - avoid_null_checks_in_equality_operators
    - avoid_relative_lib_imports
    - avoid_renaming_method_parameters
    - avoid_return_types_on_setters
    - avoid_slow_async_io
    - avoid_types_as_parameter_names
    - avoid_unused_constructor_parameters
    - await_only_futures
    - camel_case_types
    - cancel_subscriptions
    - control_flow_in_finally
    - directives_ordering
    - empty_catches
    - empty_constructor_bodies
    - empty_statements
    - hash_and_equals
    - implementation_imports
    - library_names
    - library_prefixes
    - no_adjacent_strings_in_list
    - no_duplicate_case_values
    - non_constant_identifier_names
    - overridden_fields
    - package_names
    - package_prefixed_library_names
    - prefer_adjacent_string_concatenation
    - prefer_asserts_in_initializer_lists
    - prefer_collection_literals
    - prefer_conditional_assignment
    - prefer_const_constructors
    - prefer_const_constructors_in_immutables
    - prefer_const_declarations
    - prefer_const_literals_to_create_immutables
    - prefer_contains
    - prefer_final_fields
    - prefer_final_locals
    - prefer_foreach
    - prefer_initializing_formals
    - prefer_iterable_whereType
    - prefer_is_empty
    - prefer_is_not_empty
    - prefer_single_quotes
    - prefer_typing_uninitialized_variables
    - recursive_getters
    - slash_for_doc_comments
    - sort_constructors_first
    - sort_unnamed_constructors_first
    - test_types_in_equals
    - throw_in_finally
    - type_init_formals
    - unnecessary_brace_in_string_interps
    - unnecessary_const
    - unnecessary_getters_setters
    - unnecessary_null_aware_assignments
    - unnecessary_null_in_if_null_operators
    - unnecessary_overrides
    - unnecessary_parenthesis
    - unnecessary_statements
    - unnecessary_this
    - unrelated_type_equality_checks
    - use_rethrow_when_possible
    - valid_regexps


================================================
FILE: example/README.md
================================================
# Examples

* See [example.dart](./example.dart) for a basic example
* See [fredeil/flutter-email-validator](https://github.com/fredeil/flutter-email-validator) for Flutter login example which validates email


================================================
FILE: example/example.dart
================================================
import 'package:email_validator/email_validator.dart';

void main() {
  // Basic validation — default: allowTopLevelDomains=false, allowInternational=true
  final examples = [
    ('user@example.com', 'standard address'),
    ('invalid-email', 'missing @'),
    ('user@', 'missing domain'),
    ('" "@example.org', 'quoted space — valid per RFC'),
  ];

  print('--- Basic validation ---');
  for (final (email, note) in examples) {
    final valid = EmailValidator.validate(email);
    print('  ${valid ? '✓' : '✗'} $email  ($note)');
  }

  // Allow top-level domains (useful for intranet/localhost addresses)
  print('\n--- allowTopLevelDomains = true ---');
  for (final email in ['admin@localhost', 'user@intranet']) {
    final valid = EmailValidator.validate(email, true);
    print('  ${valid ? '✓' : '✗'} $email');
  }

  // International addresses (enabled by default)
  print('\n--- International addresses (default: allowed) ---');
  final international = [
    '伊昭傑@郵件.商務', // Chinese
    'θσερ@εχαμπλε.ψομ', // Greek
  ];
  for (final email in international) {
    final validOn = EmailValidator.validate(email); // allowInternational=true
    final validOff =
        EmailValidator.validate(email, false, false); // allowInternational=false
    print('  allowed=$validOn  rejected=${!validOff}  $email');
  }
}


================================================
FILE: lib/email_validator.dart
================================================
library email_validator;

/// The classification of a domain label.
///
/// A domain label is either None, Alphabetic, Numeric or AlphaNumeric.
enum SubdomainType { None, Alphabetic, Numeric, AlphaNumeric }

/// The EmailValidator entry point.
///
/// To use the EmailValidator class, call EmailValidator.methodName.
class EmailValidator {
  static int _index = 0;

  static const String _atomCharacters = "!#\$%&'*+-/=?^_`{|}~";
  static SubdomainType _domainType = SubdomainType.None;

  static bool _isDigit(String c) {
    return c.codeUnitAt(0) >= 48 && c.codeUnitAt(0) <= 57;
  }

  static bool _isLetter(String c) {
    return (c.codeUnitAt(0) >= 65 && c.codeUnitAt(0) <= 90) ||
        (c.codeUnitAt(0) >= 97 && c.codeUnitAt(0) <= 122);
  }

  static bool _isLetterOrDigit(String c) {
    return _isLetter(c) || _isDigit(c);
  }

  static bool _isAtom(String c, bool allowInternational) {
    return c.codeUnitAt(0) < 128
        ? _isLetterOrDigit(c) || _atomCharacters.contains(c)
        : allowInternational;
  }

  // Returns true if [c] is a valid character within a domain label,
  // updating [_domainType] as a side effect.
  //
  // For ASCII input: letters and '-' mark the label as Alphabetic and
  // digits mark it as Numeric; any other ASCII character is rejected.
  // Non-ASCII input is accepted (as Alphabetic) only when
  // [allowInternational] is true.
  static bool _isDomain(String c, bool allowInternational) {
    if (c.codeUnitAt(0) < 128) {
      if (_isLetter(c) || c == '-') {
        _domainType = SubdomainType.Alphabetic;
        return true;
      }

      if (_isDigit(c)) {
        _domainType = SubdomainType.Numeric;
        return true;
      }

      return false;
    }

    if (allowInternational) {
      _domainType = SubdomainType.Alphabetic;
      return true;
    }

    return false;
  }

  // Returns true if [c] is a valid first character of a domain label,
  // setting [_domainType] accordingly: letters (and, when
  // [allowInternational] is true, non-ASCII characters) are Alphabetic;
  // digits are Numeric. Otherwise resets [_domainType] to None and
  // returns false.
  static bool _isDomainStart(String c, bool allowInternational) {
    if (c.codeUnitAt(0) < 128) {
      if (_isLetter(c)) {
        _domainType = SubdomainType.Alphabetic;
        return true;
      }

      if (_isDigit(c)) {
        _domainType = SubdomainType.Numeric;
        return true;
      }

      _domainType = SubdomainType.None;

      return false;
    }

    if (allowInternational) {
      _domainType = SubdomainType.Alphabetic;
      return true;
    }

    _domainType = SubdomainType.None;

    return false;
  }

  static bool _skipAtom(String text, bool allowInternational) {
    final startIndex = _index;

    while (_index < text.length && _isAtom(text[_index], allowInternational)) {
      _index++;
    }

    return _index > startIndex;
  }

  // Advances [_index] past a single subdomain label.
  //
  // Returns false if the label does not start with a valid character,
  // is 64 characters or longer, ends with '-', or is a one-character
  // label at the end of the string (a single-letter TLD).
  static bool _skipSubDomain(String text, bool allowInternational) {
    final startIndex = _index;

    if (!_isDomainStart(text[_index], allowInternational)) {
      return false;
    }

    _index++;

    while (_index < text.length &&
        _isDomain(text[_index], allowInternational)) {
      _index++;
    }

    // 1 letter tld is not valid
    if (_index == text.length && (_index - startIndex) == 1) {
      return false;
    }

    return (_index - startIndex) < 64 && text[_index - 1] != '-';
  }

  // Advances [_index] past the entire domain: one or more labels
  // separated by dots.
  //
  // Returns false if any label is invalid, if the domain is a single
  // label while [allowTopLevelDomains] is false, or if the final label
  // is purely numeric.
  static bool _skipDomain(
    String text,
    bool allowTopLevelDomains,
    bool allowInternational,
  ) {
    if (!_skipSubDomain(text, allowInternational)) {
      return false;
    }

    if (_index < text.length && text[_index] == '.') {
      do {
        _index++;

        if (_index == text.length) {
          return false;
        }

        if (!_skipSubDomain(text, allowInternational)) {
          return false;
        }
      } while (_index < text.length && text[_index] == '.');
    } else if (!allowTopLevelDomains) {
      return false;
    }

    // Note: by allowing AlphaNumeric,
    // we get away with not having to support punycode.
    if (_domainType == SubdomainType.Numeric) {
      return false;
    }

    return true;
  }

  // Advances [_index] past a quoted local-part (DQUOTE *qcontent DQUOTE),
  // honoring backslash escapes.
  //
  // Returns true if a properly terminated quoted string was consumed,
  // false otherwise.
  static bool _skipQuoted(String text, bool allowInternational) {
    var escaped = false;

    // skip over leading '"'
    _index++;

    while (_index < text.length) {
      if (text[_index].codeUnitAt(0) >= 128 && !allowInternational) {
        return false;
      }

      if (text[_index] == '\\') {
        escaped = !escaped;
      } else if (!escaped) {
        if (text[_index] == '"') {
          break;
        }
      } else {
        escaped = false;
      }

      _index++;
    }

    if (_index >= text.length || text[_index] != '"') {
      return false;
    }

    _index++;

    return true;
  }

  /// Attempts to parse an IPv4 address literal at the current [_index] in [text].
  ///
  /// Expects exactly four decimal groups in the range 0–255, separated by
  /// dots (e.g. `127.0.0.1`). Advances [_index] past the address on success.
  /// Returns `true` if a valid four-octet address was consumed, `false` otherwise.
  static bool _skipIPv4Literal(String text) {
    var groups = 0;

    while (_index < text.length && groups < 4) {
      final startIndex = _index;
      var value = 0;

      while (_index < text.length &&
          text[_index].codeUnitAt(0) >= 48 &&
          text[_index].codeUnitAt(0) <= 57) {
        value = (value * 10) + (text[_index].codeUnitAt(0) - 48);
        _index++;
      }

      if (_index == startIndex || _index - startIndex > 3 || value > 255) {
        return false;
      }

      groups++;

      if (groups < 4 && _index < text.length && text[_index] == '.') {
        _index++;
      }
    }

    return groups == 4;
  }

  static bool _isHexDigit(String str) {
    final c = str.codeUnitAt(0);
    return (c >= 65 && c <= 70) ||
        (c >= 97 && c <= 102) ||
        (c >= 48 && c <= 57);
  }

  // This needs to handle the following forms:
  //
  // IPv6-addr = IPv6-full / IPv6-comp / IPv6v4-full / IPv6v4-comp
  // IPv6-hex  = 1*4HEXDIG
  // IPv6-full = IPv6-hex 7(":" IPv6-hex)
  // IPv6-comp = [IPv6-hex *5(":" IPv6-hex)] "::" [IPv6-hex *5(":" IPv6-hex)]
  //             ; The "::" represents at least 2 16-bit groups of zeros
  //             ; No more than 6 groups in addition to the "::" may be
  //             ; present
  // IPv6v4-full = IPv6-hex 5(":" IPv6-hex) ":" IPv4-address-literal
  // IPv6v4-comp = [IPv6-hex *3(":" IPv6-hex)] "::"
  //               [IPv6-hex *3(":" IPv6-hex) ":"] IPv4-address-literal
  //             ; The "::" represents at least 2 16-bit groups of zeros
  //             ; No more than 4 groups in addition to the "::" and
  //             ; IPv4-address-literal may be present
  static bool _skipIPv6Literal(String text) {
    var compact = false;
    var colons = 0;

    while (_index < text.length) {
      var startIndex = _index;

      while (_index < text.length && _isHexDigit(text[_index])) {
        _index++;
      }

      if (_index >= text.length) {
        break;
      }

      if (_index > startIndex && colons > 2 && text[_index] == '.') {
        // IPv6v4
        _index = startIndex;

        if (!_skipIPv4Literal(text)) {
          return false;
        }

        return compact ? colons < 6 : colons == 6;
      }

      var count = _index - startIndex;
      if (count > 4) {
        return false;
      }

      if (text[_index] != ':') {
        break;
      }

      startIndex = _index;
      while (_index < text.length && text[_index] == ':') {
        _index++;
      }

      count = _index - startIndex;
      if (count > 2) {
        return false;
      }

      if (count == 2) {
        if (compact) {
          return false;
        }

        compact = true;
        colons += 2;
      } else {
        colons++;
      }
    }

    if (colons < 2) {
      return false;
    }

    return compact ? colons < 7 : colons == 7;
  }

  /// Validate the specified email address.
  ///
  /// If [allowTopLevelDomains] is `true`, then the validator will
  /// allow addresses with top-level domains like `email@example`.
  ///
  /// If [allowInternational] is `true`, then the validator
  /// will use the newer International Email standards for validating
  /// the email address.
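  ///
  /// For example (expected results taken from this package's test suite):
  ///
  /// ```dart
  /// EmailValidator.validate('user@example.com');   // true
  /// EmailValidator.validate('user@example');       // false (TLD only)
  /// EmailValidator.validate('user@example', true); // true
  /// ```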
  static bool validate(
    String email, [
    bool allowTopLevelDomains = false,
    bool allowInternational = true,
  ]) {
    _index = 0;

    if (email.isEmpty || email.length >= 255) {
      return false;
    }

    // Local-part = Dot-string / Quoted-string
    //       ; MAY be case-sensitive
    //
    // Dot-string = Atom *("." Atom)
    //
    // Quoted-string = DQUOTE *qcontent DQUOTE
    if (email[_index] == '"') {
      if (!_skipQuoted(email, allowInternational) || _index >= email.length) {
        return false;
      }
    } else {
      if (!_skipAtom(email, allowInternational) || _index >= email.length) {
        return false;
      }

      while (email[_index] == '.') {
        _index++;

        if (_index >= email.length) {
          return false;
        }

        if (!_skipAtom(email, allowInternational)) {
          return false;
        }

        if (_index >= email.length) {
          return false;
        }
      }
    }

    if (_index + 1 >= email.length || _index > 64 || email[_index++] != '@') {
      return false;
    }

    if (email[_index] != '[') {
      // domain
      if (!_skipDomain(email, allowTopLevelDomains, allowInternational)) {
        return false;
      }

      return _index == email.length;
    }

    // address literal
    _index++;

    // we need at least 8 more characters
    if (_index + 8 >= email.length) {
      return false;
    }

    final ipv6 = email.substring(_index - 1).toLowerCase();

    if (ipv6.contains('ipv6:')) {
      _index += 'IPv6:'.length;
      if (!_skipIPv6Literal(email)) {
        return false;
      }
    } else {
      if (!_skipIPv4Literal(email)) {
        return false;
      }
    }

    if (_index >= email.length || email[_index++] != ']') {
      return false;
    }

    return _index == email.length;
  }
}


================================================
FILE: pubspec.yaml
================================================
name: email_validator
version: 3.0.6

homepage: https://github.com/fredeil/email-validator.dart
description: A simple (but correct) Dart class for validating email addresses

environment:
  sdk: '>=2.12.0 <4.0.0'

dev_dependencies:
  test: ^1.16.8



================================================
FILE: test/email_validator_test.dart
================================================
import 'package:email_validator/email_validator.dart';
import 'package:test/test.dart';

void main() {
  final List<String> validAddresses = [
    'fredrik@dualog.com',
    '\"Abc\\@def\"@example.com',
    '\"Fred Bloggs\"@example.com',
    '\"Joe\\\\Blow\"@example.com',
    '\"Abc@def\"@example.com',
    'customer/department=shipping@example.com',
    '\$A12345@example.com',
    '!def!xyz%abc@example.com',
    '_somename@example.com',
    'valid.ipv4.addr@[123.1.72.10]',
    'valid.ipv6.addr@[IPv6:0::1]',
    'valid.ipv6.addr@[IPv6:2607:f0d0:1002:51::4]',
    'valid.ipv6.addr@[IPv6:fe80::230:48ff:fe33:bc33]',
    'valid.ipv6.addr@[IPv6:fe80:0000:0000:0000:0202:b3ff:fe1e:8329]',
    'valid.ipv6v4.addr@[IPv6:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:127.0.0.1]',

    // examples from wikipedia
    'niceandsimple@example.com',
    'very.common@example.com',
    'a.little.lengthy.but.fine@dept.example.com',
    'disposable.style.email.with+symbol@example.com',
    'user@[IPv6:2001:db8:1ff::a0b:dbd0]',
    '\"much.more unusual\"@example.com',
    '\"very.unusual.@.unusual.com\"@example.com',
    '\"very.(),:;<>[]\\\".VERY.\\\"very@\\\\ \\\"very\\\".unusual\"@strange.example.com',
    "!#\$%&'*+-/=?^_`{}|~@example.org",
    "\"()<>[]:,;@\\\\\\\"!#\$%&'*+-/=?^_`{}| ~.a\"@example.org",
    '" "@example.org',

    // examples from https://github.com/Sembiance/email-validator
    '\"\\e\\s\\c\\a\\p\\e\\d\"@sld.com',
    '\"back\\slash\"@sld.com',
    '\"escaped\\\"quote\"@sld.com',
    '\"quoted\"@sld.com',
    '\"quoted-at-sign@sld.org\"@sld.com',
    "&'*+-./=?^_{}~@other-valid-characters-in-local.net",
    '01234567890@numbers-in-local.net',
    'a@single-character-in-local.org',
    'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@letters-in-local.org',
    'backticksarelegit@test.com',
    'bracketed-IP-instead-of-domain@[127.0.0.1]',
    'country-code-tld@sld.rw',
    'country-code-tld@sld.uk',
    'letters-in-sld@123.com',
    'local@dash-in-sld.com',
    'local@sld.newTLD',
    'local@sub.domains.com',
    'mixed-1234-in-{+^}-local@sld.net',
    'one-character-third-level@a.example.com',
    'one-letter-sld@x.org',
    'punycode-numbers-in-tld@sld.xn--3e0b707e',
    'single-character-in-sld@x.org',
    'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-three-characters-so-it-is-valid-blah-blah.com',
    'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-254-characters-exactly.so-it-should-be-valid.and-im-going-to-add-some-more-words-here.to-increase-the-length-blah-blah-blah-blah-bla.org',
    'uncommon-tld@sld.mobi',
    'uncommon-tld@sld.museum',
    'uncommon-tld@sld.travel',
  ];

  final List<String> invalidAddresses = [
    'invalid',
    'invalid@',
    'invalid @',
    'invalid@[555.666.777.888]',
    'invalid@[IPv6:123456]',
    'invalid@[127.0.0.1.]',
    'invalid@[127.0.0.1].',
    'invalid@[127.0.0.1]x',

    // examples from wikipedia
    'Abc.example.com',
    'A@b@c@example.com',
    'a\"b(c)d,e:f;g<h>i[j\\k]l@example.com',
    'just\"not\"right@example.com',
    'this is\"not\\allowed@example.com',
    'this\\ still\\\"not\\\\allowed@example.com',

    // examples from https://github.com/Sembiance/email-validator
    '! #\$%`|@invalid-characters-in-local.org',
    '(),:;`|@more-invalid-characters-in-local.org',
    '* .local-starts-with-dot@sld.com',
    '<>@[]`|@even-more-invalid-characters-in-local.org',
    '@missing-local.org',
    'IP-and-port@127.0.0.1:25',
    'another-invalid-ip@127.0.0.256',
    'invalid',
    'invalid-characters-in-sld@! \"#\$%(),/;<>_[]`|.org',
    'invalid-ip@127.0.0.1.26',
    'local-ends-with-dot.@sld.com',
    'missing-at-sign.net',
    'missing-sld@.com',
    'missing-tld@sld.',
    'sld-ends-with-dash@sld-.com',
    'sld-starts-with-dashsh@-sld.com',
    'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-four-characters-so-it-is-invalid-blah-blah.com',
    'the-local-part-is-invalid-if-it-is-longer-than-sixty-four-characters@sld.net',
    'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-255-characters-exactly.so-it-should-be-invalid.and-im-going-to-add-some-more-words-here.to-increase-the-length-blah-blah-blah-blah-bl.org',
    'two..consecutive-dots@sld.com',
    'unbracketed-IP@127.0.0.1',
    'onelettertld@gmail.c',

    // examples of real (invalid) input from real users.
    'No longer available.',
    'Moved.',
  ];

  final List<String> validInternational = [
    '伊昭傑@郵件.商務', // Chinese
    'राम@मोहन.ईन्फो', // Hindi
    'юзер@екзампл.ком', // Ukrainian
    'θσερ@εχαμπλε.ψομ', // Greek
  ];

  test('Validate invalidAddresses are invalid emails', () {
    for (var actual in invalidAddresses) {
      expect(
        EmailValidator.validate(actual, true),
        equals(false),
        reason: 'E-mail: ' + actual,
      );
    }
  });

  test('Validate validAddresses are valid emails', () {
    for (var actual in validAddresses) {
      expect(
        EmailValidator.validate(actual, true),
        equals(true),
        reason: 'E-mail: ' + actual,
      );
    }
  });

  test('Validate validInternational are valid emails', () {
    for (var actual in validInternational) {
      expect(
        EmailValidator.validate(actual, true, true),
        equals(true),
        reason: 'E-mail: ' + actual,
      );
    }
  });

  test('Validate empty and whitespace-only input is invalid', () {
    expect(EmailValidator.validate(''), equals(false));
    expect(EmailValidator.validate(' '), equals(false));
    expect(EmailValidator.validate('\t'), equals(false));
  });

  test('Validate default parameter values', () {
    // Default: allowTopLevelDomains = false, allowInternational = true
    expect(
      EmailValidator.validate('user@example.com'),
      equals(true),
      reason: 'Standard email with defaults should be valid',
    );
    expect(
      EmailValidator.validate('user@example'),
      equals(false),
      reason: 'Top-level domain should be rejected by default',
    );
    expect(
      EmailValidator.validate('伊昭傑@郵件.商務'),
      equals(true),
      reason: 'International email should be valid by default',
    );
  });

  test('Validate allowTopLevelDomains parameter', () {
    expect(
      EmailValidator.validate('admin@mailserver', false),
      equals(false),
      reason:
          'TLD-only address should be invalid when allowTopLevelDomains is false',
    );
    expect(
      EmailValidator.validate('admin@mailserver', true),
      equals(true),
      reason:
          'TLD-only address should be valid when allowTopLevelDomains is true',
    );
    expect(
      EmailValidator.validate('user@example', true),
      equals(true),
      reason:
          'Single-label domain should be valid when allowTopLevelDomains is true',
    );
  });

  test('Validate allowInternational parameter rejects non-ASCII when false', () {
    expect(
      EmailValidator.validate('伊昭傑@郵件.商務', false, false),
      equals(false),
      reason:
          'International email should be invalid when allowInternational is false',
    );
    expect(
      EmailValidator.validate('user@example.com', false, false),
      equals(true),
      reason: 'ASCII email should be valid regardless of allowInternational',
    );
  });

  test('Validate local-part length boundary', () {
    final local64 = 'a' * 64;
    final local65 = 'a' * 65;
    expect(
      EmailValidator.validate('$local64@x.org'),
      equals(true),
      reason: '64-character local-part should be valid',
    );
    expect(
      EmailValidator.validate('$local65@x.org'),
      equals(false),
      reason: '65-character local-part should be invalid',
    );
  });

  test('Validate total email length boundary', () {
    // The validator rejects emails with length >= 255, so 254 is max valid
    const valid254 =
        'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-254-characters-exactly.so-it-should-be-valid.and-im-going-to-add-some-more-words-here.to-increase-the-length-blah-blah-blah-blah-bla.org';
    const invalid255 =
        'the-total-length@of-an-entire-address.cannot-be-longer-than-two-hundred-and-fifty-four-characters.and-this-address-is-255-characters-exactly.so-it-should-be-invalid.and-im-going-to-add-some-more-words-here.to-increase-the-length-blah-blah-blah-blah-bl.org';
    expect(valid254.length, equals(254));
    expect(
      EmailValidator.validate(valid254),
      equals(true),
      reason: '254-character email should be valid',
    );
    expect(invalid255.length, equals(255));
    expect(
      EmailValidator.validate(invalid255),
      equals(false),
      reason: '255-character email should be invalid',
    );
  });

  test('Validate domain label length boundary', () {
    expect(
      EmailValidator.validate(
        'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-three-characters-so-it-is-valid-blah-blah.com',
      ),
      equals(true),
      reason: '63-character domain label should be valid',
    );
    expect(
      EmailValidator.validate(
        'the-character-limit@for-each-part.of-the-domain.is-sixty-three-characters.this-is-exactly-sixty-four-characters-so-it-is-invalid-blah-blah.com',
      ),
      equals(false),
      reason: '64-character domain label should be invalid',
    );
  });

  test('Validate domain starting or ending with hyphen is invalid', () {
    expect(
      EmailValidator.validate('user@-example.com'),
      equals(false),
      reason: 'Domain starting with hyphen should be invalid',
    );
    expect(
      EmailValidator.validate('user@example-.com'),
      equals(false),
      reason: 'Domain label ending with hyphen should be invalid',
    );
  });

  test('Validate double hyphens within domain are valid', () {
    expect(
      EmailValidator.validate('user@a--b.com'),
      equals(true),
      reason: 'Double hyphens within a domain label should be valid',
    );
  });

  test('Validate numeric-only TLD is invalid', () {
    expect(
      EmailValidator.validate('user@example.123'),
      equals(false),
      reason: 'Numeric-only TLD should be invalid',
    );
    expect(
      EmailValidator.validate('user@123', true),
      equals(false),
      reason: 'Numeric-only single-label domain should be invalid',
    );
  });

  test('Validate domain with leading dot or trailing dot is invalid', () {
    expect(
      EmailValidator.validate('user@.com'),
      equals(false),
      reason: 'Domain with leading dot should be invalid',
    );
    expect(
      EmailValidator.validate('user@com.'),
      equals(false),
      reason: 'Domain with trailing dot should be invalid',
    );
  });

  test('Validate multiple subdomains are valid', () {
    expect(
      EmailValidator.validate('user@sub.domain.example.com'),
      equals(true),
      reason: 'Multiple subdomains should be valid',
    );
    expect(
      EmailValidator.validate('user@a.b.c.d.e.com'),
      equals(true),
      reason: 'Many subdomain levels should be valid',
    );
    expect(
      EmailValidator.validate('user@example.co.uk'),
      equals(true),
      reason: 'Country code TLD with SLD should be valid',
    );
  });

  test('Validate local-part with dots', () {
    expect(
      EmailValidator.validate('.user@example.com'),
      equals(false),
      reason: 'Local-part starting with dot should be invalid',
    );
    expect(
      EmailValidator.validate('user.@example.com'),
      equals(false),
      reason: 'Local-part ending with dot should be invalid',
    );
    expect(
      EmailValidator.validate('user..name@example.com'),
      equals(false),
      reason: 'Consecutive dots in local-part should be invalid',
    );
    expect(
      EmailValidator.validate('user.name@example.com'),
      equals(true),
      reason: 'Single dot in local-part should be valid',
    );
  });

  test('Validate missing local-part or domain is invalid', () {
    expect(
      EmailValidator.validate('@example.com'),
      equals(false),
      reason: 'Missing local-part should be invalid',
    );
    expect(
      EmailValidator.validate('user@'),
      equals(false),
      reason: 'Missing domain should be invalid',
    );
    expect(
      EmailValidator.validate('@'),
      equals(false),
      reason: 'Only @ sign should be invalid',
    );
  });

  test('Validate multiple @ signs is invalid', () {
    expect(
      EmailValidator.validate('user@@example.com'),
      equals(false),
      reason: 'Double @ should be invalid',
    );
  });

  test('Validate spaces in email are invalid', () {
    expect(
      EmailValidator.validate('user name@example.com'),
      equals(false),
      reason: 'Space in local-part should be invalid',
    );
    expect(
      EmailValidator.validate('user@exam ple.com'),
      equals(false),
      reason: 'Space in domain should be invalid',
    );
  });

  test('Validate special characters in local-part', () {
    expect(
      EmailValidator.validate('user+tag@example.com'),
      equals(true),
      reason: 'Plus sign in local-part should be valid',
    );
    expect(
      EmailValidator.validate('user+tag+tag2@example.com'),
      equals(true),
      reason: 'Multiple plus signs in local-part should be valid',
    );
  });

  test('Validate quoted strings edge cases', () {
    expect(
      EmailValidator.validate('"test"@example.com', true),
      equals(true),
      reason: 'Quoted local-part should be valid',
    );
    expect(
      EmailValidator.validate('""@example.com', true),
      equals(true),
      reason: 'Empty quoted local-part should be valid',
    );
    expect(
      EmailValidator.validate('"@"@example.com', true),
      equals(true),
      reason: 'Quoted @ sign in local-part should be valid',
    );
    expect(
      EmailValidator.validate('"unclosed@example.com', true),
      equals(false),
      reason: 'Unclosed quote should be invalid',
    );
  });

  test('Validate IPv4 literal edge cases', () {
    expect(
      EmailValidator.validate('user@[255.255.255.255]'),
      equals(true),
      reason: 'Max octets IPv4 should be valid',
    );
    expect(
      EmailValidator.validate('user@[256.0.0.0]'),
      equals(false),
      reason: 'IPv4 octet > 255 should be invalid',
    );
    expect(
      EmailValidator.validate('user@[1.2.3]'),
      equals(false),
      reason: 'IPv4 with only 3 octets should be invalid',
    );
    expect(
      EmailValidator.validate('user@[1.2.3.4.5]'),
      equals(false),
      reason: 'IPv4 with 5 octets should be invalid',
    );
    expect(
      EmailValidator.validate('user@[1.2.3.]'),
      equals(false),
      reason: 'IPv4 with trailing dot should be invalid',
    );
  });

  test('Validate IPv6 literal edge cases', () {
    expect(
      EmailValidator.validate('user@[IPv6:::1]'),
      equals(true),
      reason: 'IPv6 loopback should be valid',
    );
    expect(
      EmailValidator.validate('user@[IPv6:1::1]'),
      equals(true),
      reason: 'IPv6 compact form should be valid',
    );
    expect(
      EmailValidator.validate('user@[IPv6:1:2:3:4:5:6:7:8]'),
      equals(true),
      reason: 'IPv6 full form should be valid',
    );
    expect(
      EmailValidator.validate('user@[IPv6:1:2:3:4:5:6:7:8:9]'),
      equals(false),
      reason: 'IPv6 with too many groups should be invalid',
    );
  });

  test('Validate IPv6v4 literal edge cases', () {
    expect(
      EmailValidator.validate(
        'user@[IPv6:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:127.0.0.1]',
      ),
      equals(true),
      reason: 'Valid IPv6v4 address should be valid',
    );
    expect(
      EmailValidator.validate(
        'user@[IPv6:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:256.0.0.0]',
      ),
      equals(false),
      reason: 'IPv6v4 with invalid IPv4 part should be invalid',
    );
  });

  test('Validate unbracketed IP domain is invalid', () {
    expect(
      EmailValidator.validate('user@123.123.123.123'),
      equals(false),
      reason:
          'Unbracketed IP should be treated as numeric domain and be invalid',
    );
  });

  test('Validate underscore in domain is invalid', () {
    expect(
      EmailValidator.validate('user@exam_ple.com'),
      equals(false),
      reason: 'Underscore in domain should be invalid',
    );
  });

  test('Validate single-character TLD is invalid', () {
    expect(
      EmailValidator.validate('a@b.c'),
      equals(false),
      reason: 'Single-character TLD should be invalid',
    );
  });

  test('Validate minimal valid email addresses', () {
    expect(
      EmailValidator.validate('a@b.cc'),
      equals(true),
      reason: 'Minimal valid email should pass',
    );
    expect(
      EmailValidator.validate('a@bb.cc'),
      equals(true),
      reason: 'Minimal email with 2-char SLD should pass',
    );
  });

  test('Validate domain with numeric subdomain and alpha TLD', () {
    expect(
      EmailValidator.validate('user@123.com'),
      equals(true),
      reason: 'Numeric subdomain with alphabetic TLD should be valid',
    );
    expect(
      EmailValidator.validate('user@123abc.com'),
      equals(true),
      reason: 'Alphanumeric subdomain should be valid',
    );
  });
}


================================================
FILE: tool/run_tests.sh
================================================
#!/bin/bash

set -e

DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )

cd "$DIR"

echo "Installing dependencies..."
dart pub get

echo "Checking formatting..."
dart format --output=none --set-exit-if-changed .

echo "Analyzing for warnings and type errors..."
dart analyze --fatal-infos

echo "Running tests..."
dart test

echo -e "\n\033[32m✓ OK\033[0m"
SYMBOL INDEX (18 symbols across 3 files)

FILE: example/example.dart
  function main (line 3) | void main()

FILE: lib/email_validator.dart
  type SubdomainType (line 6) | enum SubdomainType { None, Alphabetic, Numeric, AlphaNumeric }
  class EmailValidator (line 11) | class EmailValidator {
    method _isDigit (line 17) | bool _isDigit(String c)
    method _isLetter (line 21) | bool _isLetter(String c)
    method _isLetterOrDigit (line 26) | bool _isLetterOrDigit(String c)
    method _isAtom (line 30) | bool _isAtom(String c, bool allowInternational)
    method _isDomain (line 49) | bool _isDomain(String c, bool allowInternational)
    method _isDomainStart (line 74) | bool _isDomainStart(String c, bool allowInternational)
    method _skipAtom (line 101) | bool _skipAtom(String text, bool allowInternational)
    method _skipSubDomain (line 113) | bool _skipSubDomain(String text, bool allowInternational)
    method _skipDomain (line 137) | bool _skipDomain(
    method _skipQuoted (line 174) | bool _skipQuoted(String text, bool allowInternational)
    method _skipIPv4Literal (line 212) | bool _skipIPv4Literal(String text)
    method _isHexDigit (line 240) | bool _isHexDigit(String str)
    method _skipIPv6Literal (line 262) | bool _skipIPv6Literal(String text)
    method validate (line 334) | bool validate(

FILE: test/email_validator_test.dart
  function main (line 4) | void main()
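
The symbol index above shows that the package's public surface is effectively one static method, `EmailValidator.validate`, whose signature begins `bool validate(` with optional flags such as `allowTopLevelDomains` mentioned in `example/example.dart`. A minimal usage sketch (expected results taken from the test cases in `test/email_validator_test.dart`, not verified beyond them):

```dart
import 'package:email_validator/email_validator.dart';

void main() {
  // Plain ASCII address with a multi-character TLD.
  print(EmailValidator.validate('a@b.cc')); // true, per the minimal-address test

  // Bracketed IPv6 literal, exercised in the IPv6 edge-case tests above.
  print(EmailValidator.validate('user@[IPv6:::1]')); // true, per the loopback test

  // Unbracketed IP is treated as a numeric domain and rejected.
  print(EmailValidator.validate('user@123.123.123.123')); // false, per the tests
}
```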

About this extraction

This page contains the full source code of the fredeil/email-validator.dart GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 20 files (155.7 KB), approximately 39.4k tokens, and a symbol index with 18 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
